Supreme Court remands social media moderation cases over First Amendment issues

Two state laws that could upend the way social media companies handle content moderation are still in limbo after a Supreme Court ruling sent the challenges back to lower courts, vacating previous rulings. In a 9-0 decision in Moody v. NetChoice and NetChoice v. Paxton, the Supreme Court said that earlier rulings in lower courts had not properly evaluated the laws’ impact on the First Amendment.

The cases stem from two state laws, one from Texas and one from Florida, that sought to restrict social media companies’ ability to moderate content. The Texas law, passed in 2021, allows users to sue large social media companies over alleged “censorship” of their political views. The Supreme Court suspended the law in 2022 following a legal challenge. Meanwhile, the Florida measure, also passed in 2021, attempted to impose fines on social media companies for banning politicians. That law has also been on hold pending legal challenges.

Both laws were challenged by NetChoice, an industry group that represents Meta, Google, X and other large tech companies. NetChoice argued that the laws were unconstitutional and would essentially prevent large platforms from performing any kind of content moderation. The Biden Administration also opposed both laws. In a statement, NetChoice called the decision “a victory for First Amendment rights online.”

In a decision authored by Justice Elena Kagan, the court said that lower court rulings in both cases “concentrated” on the issue of “whether a state law can regulate the content-moderation practices used in Facebook’s News Feed (or near equivalents).” But, she wrote, “they did not address the full range of activities the laws cover, and measure the constitutional against the unconstitutional applications.”

Essentially, the usually divided court agreed that the First Amendment implications of the laws could have broad impacts on parts of these sites unaffected by algorithmic sorting or content moderation (direct messages, for instance), as well as on speech in general. Analysis of those broader effects, Kagan wrote, simply never occurred in the lower court proceedings. The decision to remand means that analysis should now take place, and the case may come back before SCOTUS in the future.

“In sum, there is much work to do below on both these cases … But that work must be done consistent with the First Amendment, which does not go on leave when social media are involved,” Kagan wrote. 

Detroit police can no longer use facial recognition results as the sole basis for arrests

The Detroit Police Department has to adopt new rules curbing its reliance on facial recognition technology after the city reached a settlement this week with Robert Williams, a Black man who was wrongfully arrested in 2020 due to a false face match. It’s not an all-out ban on the technology, though, and the court’s jurisdiction to enforce the agreement extends for only four years. Under the new restrictions, which the ACLU is calling the strongest such policies for law enforcement in the country, police cannot make arrests based solely on facial recognition results or conduct a lineup based only on facial recognition leads.

Williams was arrested after facial recognition technology flagged his expired driver’s license photo as a possible match for the identity of an alleged shoplifter, which police then used to construct a photo lineup. He was arrested at his home, in front of his family, which he says “completely upended my life.” Detroit PD is known to have made at least two other wrongful arrests based on the results of facial recognition technology (FRT), and in both cases, the victims were Black, the ACLU noted in its announcement of the settlement. Studies have shown that facial recognition is more likely to misidentify people of color.

The new rules stipulate that “[a]n FRT lead, combined with a lineup identification, may never be a sufficient basis for seeking an arrest warrant,” according to a summary of the agreement. There must also be “further independent and reliable evidence linking a suspect to a crime.” Police in Detroit will have to undergo training on the technology that addresses the racial bias in its accuracy rates, and all cases going back to 2017 in which facial recognition was used to obtain an arrest warrant will be audited.

In an op-ed for TIME published today, Williams wrote that the agreement means, essentially, that “DPD can no longer substitute facial recognition for basic investigative police work.”

EU competition chief jabs at Apple from both sides over AI delay

It's safe to say Apple and the European Commission aren't exactly bosom buddies. The two sides have been at loggerheads over Apple's compliance — or alleged lack thereof — with the European Union's Digital Markets Act (DMA), a law designed to rein in the power of major tech companies.

Apple said last week it would delay the rollout of certain features in the European Union, including Apple Intelligence AI tools, over concerns "that the interoperability requirements of the DMA could force us to compromise the integrity of our products in ways that risk user privacy and data security." As it turns out, the EU is not exactly happy about that decision.

The call to push back the rollout of Apple Intelligence in the EU is a “stunning, open declaration that they know 100 percent that this is another way of disabling competition where they have a stronghold already,” EU competition commissioner Margrethe Vestager said at a Forum Europa event, according to Euractiv. Vestager added that the “short version of the DMA” means companies have to be open for competition to keep operating in the region.

Not to leap to the defense of Apple here, but these comments are sure to raise an eyebrow or two, especially after Vestager also said she "was personally quite relieved that I would not get an AI-updated service on my iPhone." Apple does intend to bring Apple Intelligence to Europe more broadly, but it's taking a cautious approach in that region, citing "regulatory uncertainties" and a desire to ensure it won't have to compromise on user safety.

As it stands, the European Commission is carrying out multiple investigations into the company over possible violations of the DMA. This week, it accused Apple of violating the law's anti-steering provisions by blocking app developers from freely informing users about alternate payment options outside of the company's ecosystem. If found in violation, Apple could be on the hook for a fine of up to 10 percent of its global annual revenue. Based on its 2023 sales, that could be a penalty of up to $38 billion. The percentage of the fine can double for repeated violations.
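
For a rough sense of the numbers, here's a back-of-the-envelope sketch. It assumes Apple's fiscal 2023 revenue of roughly $383 billion, the figure the $38 billion estimate appears to rest on:

```python
# Back-of-the-envelope estimate of Apple's maximum DMA exposure.
# The ~$383 billion revenue figure is an assumption based on Apple's
# reported fiscal 2023 results; the DMA caps fines at 10 percent of
# global annual turnover, doubling to 20 percent for repeat violations.
apple_fy2023_revenue = 383e9  # USD, approximate

max_fine = 0.10 * apple_fy2023_revenue
repeat_cap = 0.20 * apple_fy2023_revenue

print(f"Maximum fine: ${max_fine / 1e9:.1f}B")         # ~$38.3B
print(f"Repeat-violation cap: ${repeat_cap / 1e9:.1f}B")  # ~$76.6B
```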

Earlier this year, before the DMA came into force, the European Commission fined Apple €1.8 billion ($1.95 billion) over a violation of previous anti-steering rules. According to the Commission, Apple prevented rival music streaming apps from telling users that they could pay less for subscriptions by signing up outside of iOS apps. Apple has challenged the fine.

The nation’s oldest nonprofit newsroom is suing OpenAI and Microsoft

The Center for Investigative Reporting, the nation’s oldest nonprofit newsroom and the producer of Mother Jones and Reveal, sued OpenAI and Microsoft in federal court on Thursday for allegedly using its content to train AI models without consent or compensation. It’s the latest in a long line of lawsuits filed by publishers and creators accusing generative AI companies of violating copyright.

“OpenAI and Microsoft started vacuuming up our stories to make their product more powerful, but they never asked for permission or offered compensation, unlike other organizations that license our material,” said Monika Bauerlein, CEO of the Center for Investigative Reporting, in a statement. “This free rider behavior is not only unfair, it is a violation of copyright. The work of journalists, at CIR and everywhere, is valuable, and OpenAI and Microsoft know it.” Bauerlein said that OpenAI and Microsoft treat the work of nonprofit and independent publishers “as free raw material for their products," and added that such moves by generative AI companies hurt the public’s access to truthful information in a “disappearing news landscape.”

OpenAI and Microsoft did not respond to a request for comment by Engadget.

The CIR’s lawsuit, filed in federal court in Manhattan, accuses OpenAI and Microsoft, which owns nearly half of the AI company, of multiple violations of the Copyright Act and the Digital Millennium Copyright Act.

News organizations find themselves at an inflection point with generative AI. While the CIR is joining publishers like The New York Times, New York Daily News, The Intercept, AlterNet and Chicago Tribune in suing OpenAI, other publishers have chosen to strike licensing deals with the company. These deals allow OpenAI to train its models on those publishers’ archives and ongoing content and to cite information from them in responses offered by ChatGPT.

On the same day the CIR sued OpenAI, for instance, TIME magazine announced a deal with the company granting it access to 101 years of archives. Last month, OpenAI signed a $250 million multi-year deal with News Corp, the owner of The Wall Street Journal, to train its models on more than a dozen brands owned by the publisher. The Financial Times, Axel Springer (the owner of Politico and Business Insider), The Associated Press and Dotdash Meredith have also signed deals with OpenAI.

Supreme Court ruling may allow officials to coordinate with social platforms again

The US Supreme Court has ruled on a controversial attempt by two states, Missouri and Louisiana, to bar Biden Administration officials and other government agencies from engaging with workers at social media companies about misinformation, election interference and other policies. Rather than set new guidelines on acceptable communication between these parties, the Court held that the plaintiffs lacked standing to bring the issue at all.

In Murthy v. Missouri, the states (as well as five individual social media users) alleged that, in the midst of the COVID pandemic and the 2020 election, officials at the CDC, FBI and other government agencies "pressured" Meta, Twitter and Google "to censor their speech in violation of the First Amendment."

The Court wrote, in an opinion authored by Justice Barrett, that "the plaintiffs must show a substantial risk that, in the near future, at least one platform will restrict the speech of at least one plaintiff in response to the actions of at least one Government defendant. Here, at the preliminary injunction stage, they must show that they are likely to succeed in carrying that burden." She went on to describe this as "a tall order." 

Though a Louisiana District Court order blocking contact between social media companies and Biden Administration officials has been on hold, the case has still had a significant impact on relationships between these parties. Last year, Meta revealed that its security researchers were no longer receiving their usual briefings from the FBI or CISA (Cybersecurity and Infrastructure Security Agency) regarding foreign election interference. FBI officials also acknowledged instances in which they discovered election interference attempts but didn’t notify social media companies, due to additional layers of legal scrutiny implemented after the lawsuit was filed. With today's ruling, it seems such contact may now be allowed to resume.

In part, it seems the Court was reluctant to rule on the case because of the potential for far-reaching First Amendment implications. Among the arguments made by the plaintiffs was an assertion of a "right to listen" theory, that social media users have a Constitutional right to engage with content. "This theory is startlingly broad," Barrett wrote, "as it would grant all social-media users the right to sue over someone else’s censorship." The opinion was joined by Chief Justice Roberts and Justices Sotomayor, Kagan, Kavanaugh and Jackson. Justice Alito dissented, joined by Justices Thomas and Gorsuch.

The case was one of a handful involving free speech and social media to come before the Supreme Court this term. The court is also set to rule on two linked cases involving state laws from Texas and Florida that could upend the way social media companies handle content moderation.

Verizon will pay a $1 million fine to settle a 911 outage investigation

Verizon has agreed to pay a $1.05 million penalty to settle a Federal Communications Commission investigation into whether the company broke the agency's rules after a 911 outage. Over a period of one hour and 44 minutes in December 2022, the outage prevented hundreds of emergency calls from going through in Alabama, Florida, Georgia, North Carolina, South Carolina and Tennessee, the FCC said.

The agency added that the outage was akin to one that occurred two months earlier. Although Verizon carried out mitigation efforts to help prevent outages similar to the one in October 2022, "certain failures recurred," according to the FCC. As part of the settlement, Verizon has committed to implementing a compliance plan to make sure it abides by the FCC's 911 rules and adheres to best practices, including risk assessments and security-related measures.

“When you call 911 in an emergency, it’s critical that your call goes through,” FCC Chairwoman Jessica Rosenworcel said in a statement. “Today’s action is part of the FCC’s ongoing effort to ensure that the public has reliable communications, including access to 911.”

Record labels sue AI music generators for ‘massive infringement of recorded music’

Major music labels are taking on AI startups that they believe trained on their songs without paying. Universal Music Group, Warner Music Group and Sony Music Group sued the music generators Suno and Udio for allegedly infringing on copyrighted works on a “massive scale.”

The Recording Industry Association of America (RIAA) initiated the lawsuits and wants to establish that there is “nothing that exempts AI technology from copyright law or that excuses AI companies from playing by the rules.”

The music labels’ lawsuits in US federal court accuse Suno and Udio of scraping their copyrighted tracks from the internet. The filings against the AI companies reportedly demand injunctions against future use and damages of up to $150,000 per infringed work. (That sounds like it could add up to a monumental sum if the court finds them liable.) The suits appear aimed at establishing licensed training as the only acceptable industry framework for AI moving forward — while instilling fear in companies that train their models without consent.
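
To illustrate how quickly that ceiling compounds, here is a quick hypothetical sketch. The work counts below are invented for illustration; the filings don't specify a total number of infringed recordings:

```python
# Hypothetical scaling of statutory damages at the $150,000-per-work
# maximum. The work counts are illustrative assumptions, not figures
# from the lawsuits.
MAX_PER_WORK = 150_000  # USD, statutory maximum per infringed work

for works in (100, 1_000, 10_000):
    total = works * MAX_PER_WORK
    print(f"{works:>6,} works -> ${total:,}")
# 100 works    -> $15,000,000
# 1,000 works  -> $150,000,000
# 10,000 works -> $1,500,000,000
```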

Suno AI and Udio AI (the latter run by Uncharted Labs) are startups whose software generates music from text inputs. The former is a Microsoft partner for its Copilot music generation tool. The RIAA claims the services’ output is uncannily similar to existing works, to the degree that they must have been trained on copyrighted songs. It also claims the companies didn’t deny training on copyrighted works, instead shielding themselves behind claims that their training data is “confidential business information” and that their practices are standard for the industry.

According to The Wall Street Journal, the lawsuits accuse the AI generators of creating songs that sounded remarkably similar to The Temptations’ “My Girl,” Green Day’s “American Idiot,” and Mariah Carey’s “All I Want for Christmas Is You,” among others. They also claim the AI services produced vocals indistinguishable from those of artists like Lin-Manuel Miranda, Bruce Springsteen, Michael Jackson and ABBA.

Wired reports that one example cited in the lawsuit details how one of the AI tools reproduced a song that sounded nearly identical to Chuck Berry’s pioneering classic “Johnny B. Goode,” using the prompt, “1950s rock and roll, rhythm & blues, 12 bar blues, rockabilly, energetic male vocalist, singer guitarist,” along with some of Berry’s lyrics. The suit claims the generator reproduced the original track’s “Go, Johnny, go, go” chorus almost perfectly.

To be clear, the RIAA isn’t arguing that all AI training on copyrighted works is wrong. Instead, it’s saying such training is illegal without licensing and consent, i.e., when the labels (and, likely to a lesser degree, the artists) don’t make any money off of it.

The recording industry is working on AI deals of its own that license music in a way it believes is fair for its bottom line. These include an agreement between Universal and SoundLabs, which lets the latter create vocal models for artists while the singers retain control of ownership and output. The label also partnered with YouTube on an AI licensing and royalties deal. Universal also represents Drake, whose diss track against Kendrick Lamar earlier this year used AI-generated copies of Tupac Shakur and Snoop Dogg’s voices.

“There is room for AI and human creators to forge a sustainable, complementary relationship,” the filing against Suno reads. “This can and should be achieved through the well-established mechanism of free-market licensing that ensures proper respect for copyright owners.”

According to Bloomberg, Suno co-founder Mikey Shulman said in April that the company’s practices are “legal” and “fairly in line with what other people are doing.” The AI industry at large appears to be racing toward a threshold where its tools are too ubiquitous to be held accountable before anyone can do anything about how the models were trained.

“We work very closely with lawyers to make sure that what we’re doing is legal and industry standard,” Shulman added. “If the law changes, obviously we would change our business one way or the other.”

Five men face jail time for running the illegal streaming service Jetflicks

The illegal streaming service Jetflicks once boasted on its website that visitors could watch just about any TV show or movie “Anytime. Anywhere.” Now the five people behind the operation are facing some serious jail time.

A jury in a Las Vegas federal court on Friday found Kristopher Dallmann, Douglas Courson, Felipe Garcia, Jared Jaurequi and Peter Huber guilty of conspiracy to commit criminal copyright infringement. Dallmann, who led the Jetflicks operation, was also found guilty on two counts of money laundering and three counts of misdemeanor criminal copyright infringement, according to court documents and a US Department of Justice press release.

Jetflicks used computer scripts and software to scour torrent and Usenet sites for illegal copies of movies and television shows, posting hundreds of thousands of them as far back as 2007. The defendants amassed a catalog of bootleg shows and movies bigger than the combined collections of streaming services including Netflix, Hulu, Vudu and Amazon Prime, according to the Department of Justice.

Users could pay a subscription fee to access the site on pretty much any media streaming device with a web browser. Jetflicks claimed to “offer more than 183,200 television episodes and have more than 37,000 subscribers,” according to the initial indictment filed in the Eastern District of Virginia in 2019.

Dallmann, the leader of the group, and his co-conspirators “made millions of dollars streaming and distributing this catalog of stolen content,” according to the press release.

At one point, operators and employees of Jetflicks were making hundreds of thousands of dollars a year from its subscription service. Dallmann wrote in an online chat that his site made $750,000 in one year, according to the indictment.

The Motion Picture Association of America (MPAA) took notice of Jetflicks in 2012 and sent cease and desist letters to the site’s operators. Four years later, the Federal Bureau of Investigation (FBI) began its undercover investigation of the site by paying for a six-month subscription. Undercover agents documented multiple illegal uploads of shows like Shameless, Ray Donovan, The OA and Syfy’s 12 Monkeys, along with the charges for accessing them, then traced those charges back to the defendants’ bank accounts, according to court records.

A sentencing hearing has yet to be scheduled. The Department of Justice says Dallmann could face up to 48 years in prison, and the four remaining defendants could each face five years.

Apple will reportedly withhold new AI features in Europe due to regulations

Apple reportedly said on Friday that it would delay iOS 18’s marquee AI features in the European Union, conveniently blaming Digital Markets Act (DMA) regulations. The company claimed it would block the launch of Apple Intelligence, iPhone Mirroring on the Mac and SharePlay Screen Sharing in the EU this year, according to Bloomberg, which reported the news.

“We are concerned that the interoperability requirements of the DMA could force us to compromise the integrity of our products in ways that risk user privacy and data security,” the company said in a statement to Bloomberg. Apple didn’t expand on how DMA regulations could force it to compromise user privacy and security.

The DMA, which passed in 2022, aims to usher in fair competition by reining in the tactics Big Tech companies use to stifle rivals. It blocks them from pushing out smaller competitors, favoring their own services, locking customers’ data into their platforms and limiting transparency about their use of advertising data.

This isn’t the first time Apple has pinned blame on regulations — without offering much in the way of specifics — for blocking EU users from having nice things. Earlier this year, the company said it would remove the ability to add home screen web apps in Europe due to DMA rules. It later reversed course, citing “requests” it received. Google did something similar when it removed third-party apps and watch faces from European devices, blaming “new regulatory requirements.”

Apple’s delay comes as EU regulations prove a thorn in the company’s side. The European Commission formally opened an investigation into the company in March and reportedly plans to charge it in the coming weeks for DMA violations. The company was already fined €1.8 billion ($1.95 billion) earlier this year for preventing app developers from informing iOS users about cheaper music subscription plans outside of its ecosystem.

How small claims court became Meta’s customer service hotline

Last month, Ray Palena boarded a plane from New Jersey to California to appear in court. He found himself engaged in a legal dispute against one of the largest corporations in the world, and improbably, the venue for their David-versus-Goliath showdown would be San Mateo's small claims court.

Over the course of eight months and an estimated $700 (mostly in travel expenses), he was able to claw back what all other methods had failed to deliver: his personal Facebook account.

Those may be extraordinary lengths to regain a digital profile with no relation to its owner's livelihood, but Palena is one of a growing number of frustrated users of Meta's services who, unable to get help from an actual human through normal channels of recourse, are using the court system instead. And in many cases, it's working.

Engadget spoke with five individuals who have sued Meta in small claims court over the last two years in four different states. In three cases, the plaintiffs were able to restore access to at least one lost account. One person was also able to win financial damages and another reached a cash settlement. Two cases were dismissed. In every case, the plaintiffs were at least able to get the attention of Meta’s legal team, which appears to have something of a playbook for handling these claims.

At the heart of these cases is the fact that Meta lacks the necessary volume of human customer service workers to assist those who lose their accounts. The company’s official help pages steer users who have been hacked toward confusing automated tools that often lead users to dead-end links or emails that don’t work if your account information has been changed. (The company recently launched a $14.99-per-month program, Meta Verified, which grants access to human customer support. Its track record as a means of recovering hacked accounts after the fact has been spotty at best, according to anecdotal descriptions.)

Hundreds of thousands of people also turn to their state Attorney General’s office, as some state AGs have made requests on users’ behalf — on Reddit, this is known as the “AG method.” But attorneys general across the country have been so inundated with these requests that they formally asked Meta to fix its customer service, too. “We refuse to operate as the customer service representatives of your company,” a coalition of 41 state AGs wrote in a letter to the company earlier this year.

Facebook and Instagram users have long sought creative and sometimes extreme measures to get hacked accounts back due to Meta’s lack of customer support features. Some users have resorted to hiring their own hackers or buying an Oculus headset, since Meta has dedicated support staff for the device (users on Reddit report this “method” no longer works). The small claims approach has become a popular topic on Reddit forums where frustrated Meta users trade advice on various “methods” for getting an account back. People Clerk, a site that helps people write demand letters and other paperwork required for small claims court, published a help article called “How to Sue Facebook” in March.

It’s difficult to estimate just how many small claims cases are being brought by Facebook and Instagram users, but they may be on the rise. Patrick Forrest, the chief legal officer for Justice Direct, the legal services startup that owns People Clerk, says the company has seen a “significant increase” in cases against Meta over the last couple of years.

One of the advantages of small claims court is that it’s much more accessible to people without deep pockets and legal training. Filing fees are typically under $100 and many courthouses have resources to help people complete the necessary paperwork for a case. “There's no discovery, there are no depositions, there's no pre-trial,” says Bruce Zucker, a law professor at California State University, Northridge. “You get a court date and it's going to be about a five or 10 minute hearing, and you have a judge who's probably also tried to call customer service and gotten nowhere.”

“Facebook and Instagram and WhatsApp [have] become crucial marketplaces where people conduct their business, where people are earning a living," Forrest said. “And if you are locked out of that account, business or personal, it can lead to severe financial damages, and it can disrupt your ability to sustain your livelihood.”

One such person whose finances were enmeshed with Meta’s products is Valerie Garza, the owner of a massage business. She successfully sued the company in a San Diego small claims court in 2022 after a hack that cost her access to her personal Facebook and Instagram accounts, as well as those associated with her business. She was able to document thousands of dollars in resulting losses.

A Meta legal representative contacted Garza a few weeks before her small claims court hearing, requesting she drop the case. She declined, and when Meta didn’t show up to her hearing, she won by default. "When we went through all of the loss of revenues," Garza told Engadget, "[the judge] kind of had to give it to me.”

But that wasn’t the end of Garza’s legal dispute with Meta. After the first hearing, the company filed a motion asking the judge to set aside the verdict, citing its own failure to appear at the hearing. Meta also tried to argue that its terms of service capped its liability at $100. Another hearing was scheduled, and a lawyer again contacted Garza offering to help get her account back.

“He seemed to actually kind of just want to get things turned back on, and that was still my goal, at this point,” Garza said. It was then she discovered that her business’ Instagram was being used to advertise sex work.

She began collecting screenshots of the activity on the account, which violated Instagram’s terms of service, as well as fraudulent charges for Facebook ads bought by whoever hacked her account. Once again, Meta didn’t show up to the hearing and a judge ordered the company to pay her the $7,268.65 in damages she had requested.

“I thought they were going to show up this time because they sent their exhibits, they didn't ask for a postponement or anything,” she says. “My guess is they didn't want to go on record and have a transcript showing how completely grossly negligent they are in their business and how very little they care about the safety or financial security of their paying advertisers.”

In July of 2023, Garza indicated in court documents that Meta had paid in full. In all, the process took more than a year, three court appearances and countless hours of work. But Garza says it was worth it. “I just can't stand letting somebody take advantage and walking away,” she says.

Even for individuals whose work doesn't depend on Meta's platforms, a hacked account can result in real harm.

Palena, who flew cross-country to challenge Meta in court, had no financial stake in his Facebook account, which he claimed nearly 20 years ago when the social network was still limited to college students. But whoever hacked him had changed the associated email address and phone number, and began using his page to run scam listings on Facebook Marketplace.

“I was more concerned about the damage it could do to me and my name if something did happen, if someone actually was scammed,” he tells Engadget. In his court filing, he asked for $10,000 in damages, the maximum allowed in California small claims court. He wrote that Meta had violated its own terms of service by allowing a hacked account to stay up, damaging his reputation. “I didn’t really care that much about financial compensation,” Palena says. “I really just wanted the account back because the person who hacked the account was still using it. They were using my profile with my name and my profile image.”

A couple weeks later, a legal rep from Meta reached out to him and asked him for information about his account. They exchanged a few emails over several weeks, but his account was still inaccessible. The same day he boarded a plane to San Mateo, the Meta representative emailed him again and asked if he would be willing to drop the case since “the access team is close to getting your account secure and activated again.” He replied that he intended to be in court the next day as he was still unable to get into his account.

Less than half an hour before his hearing was scheduled to start, he received the email he had spent months waiting for: a password reset link to get back into his account. Palena still attended the hearing, though Meta did not. According to court records reviewed by Engadget, Palena told the judge the case had been “tentatively resolved,” though he hasn’t officially dropped the case yet.

While filing a small claims court case is comparatively simple, it can still be a minefield, even when it comes to something as seemingly straightforward as which court to file in. Forrest notes that Facebook’s terms of service stipulate that legal cases must be brought in San Mateo County, home of Meta’s headquarters. But, confusingly, the terms of service for Meta accounts state that only cases other than small claims must be filed in San Mateo. In spite of the apparent contradiction, some people (like Garza) have had success suing Meta outside of San Mateo.

Each jurisdiction also has different rules for maximum allowable compensation in small claims, what sorts of relief those courts can grant and even whether parties are allowed to have a lawyer present. The low barrier to entry means many first-time plaintiffs are navigating the legal system without help, and making rookie mistakes along the way.

Shaun Freeman had spent years building up two Instagram accounts, which he describes as similar to TMZ but with “a little more character.” The pages, which had hundreds of thousands of followers, had also been a significant source of income to Freeman, who has also worked in the entertainment industry and uses the stage name Young Platinum.

He says his pages had been suspended or disabled in the past, but he was able to get them back through Meta’s appeals process, and once through a complaint to the California Attorney General’s office. But in 2023 he again lost access to both accounts. He says one was disabled and one is inaccessible due to what seems like a technical glitch.

He tried to file appeals and even asked a friend of a friend who worked at Meta to look into what had happened, but was unsuccessful. Apparently out of other options, he filed a small claims case in Nevada in February. A hearing was scheduled for May, but Freeman had trouble figuring out the legal mechanics. “It took me months and months to figure out how to get them served,” Freeman says. He was eventually able to hire a process server and got the necessary signature 10 days before his hearing. But it may have been too late. Court records show the case was dismissed for failure to serve.

Even without operator error, Meta seems content to create hardship for would-be litigants over matters much smaller than the company's more headline-grabbing antitrust and child safety disputes. Based on correspondence reviewed by Engadget, the company maintains a separate "small claims docket" email address to contact would-be litigants.

Ron Gaul, who lives in North Dakota, filed a small claims suit after Meta disabled his account following a wave of what he describes as targeted harassment. The case was eventually dismissed after Meta’s lawyers had the case moved to district court, which is permissible for a small claims case under North Dakota law.

Gaul says he couldn’t keep up with the motions filed by Meta’s lawyers, whom he had hoped to avoid by filing in small claims court. “I went to small claims because I couldn't have a lawyer,” he tells Engadget.

Ryan, an Arizona real estate agent who asked to be identified by his first name only, decided to sue Meta in small claims with his partner after their Facebook accounts were disabled in the fall of 2022. They were both admins of several large Facebook Groups and he says their accounts were disabled over a supposed copyright violation.

Before a scheduled hearing, the company reached out. “They started basically trying to bully us,” Ryan says. “They started saying that they have a terms of service [and] they can do whatever they want, they could delete people for any reason.” Much like Gaul, Ryan expected small claims would level the playing field. But according to emails and court records reviewed by Engadget, Meta often deploys its own legal resources as well as outside law firms to respond to these sorts of claims and engage with small claims litigants outside of court. “They put people that still have legal training against these people that are, you know, representing themselves,” he said.

In the end, Meta’s legal team was able to help Ryan get his account back, and he agreed to drop himself from the small claims case. But two months later his partner had still not gotten back into hers. Meta eventually told her that her account had been permanently deleted and could not be restored. The company ultimately offered $3,500 — the maximum amount for a small claims case in Arizona. He says they wanted more, but Meta refused, and they felt like they were out of options. Ryan claims they had already lost tens of thousands of dollars in potential sales that they normally sourced from Facebook. “We were prepared to go further, but no lawyer would really take it on without a $15,000 retainer and it wasn’t worth it.”

While it may seem surprising that Meta would give these small claims cases so much attention, Zucker, the Cal State Northridge professor, says that big companies have their own reasons for wanting to avoid court. “I don’t think places like Google or Meta want to have a bunch of judgments against them … because then that becomes a public record and starts floating around,” he says. “So they do take these things seriously.”

Without responding to specific questions about the substance of this story, Meta instead sent Engadget the following statement:

"We know that losing and recovering access to your online accounts can be a frustrating experience. We invest heavily in designing account security systems to help prevent account compromise in the first place, and in educating our users, including by regularly sharing new security features and tips for how people can stay safe and vigilant against potential targeting by hackers. But we also know that bad actors, including scammers, target people across the internet and constantly adapt to evade detection by social media platforms like ours, email and telecom providers, banks and others. To detect malicious activity and help protect people who may have gotten compromised via email phishing, malware or other means, we also constantly improve our detection, enforcement and support systems, in addition to providing channels where people can report account access issues to us, working with law enforcement and taking legal action against malicious groups."
