X lawsuit accuses Media Matters of running a campaign to drive advertisers away

X has filed a lawsuit against media watchdog group Media Matters over the latter's research showing ads on the social network appearing next to antisemitic content. The company's owner, Elon Musk, promised to file a "thermonuclear lawsuit" against the organization late last week following an advertiser exodus. In its complaint, X said Media Matters "knowingly and maliciously manufactured side-by-side images depicting advertisers' posts on X Corp.'s social media platform beside Neo-Nazi and white nationalist fringe content." It added that the group portrayed the "manufactured images" as if they represented the typical user's experience on the platform. "Media Matters designed both these images and the resulting media strategy to drive advertisers from the platform and destroy X Corp," the company wrote.

As TechCrunch notes, though, Media Matters didn't exactly "manufacture" the images it used in its research. Based on X's own investigation, as detailed in its lawsuit, the organization used an account more than 30 days old to bypass the website's ad filters, then followed a set of users known to produce "extreme, fringe content" along with the biggest advertisers on the platform. The group then allegedly kept scrolling and refreshing its feed to generate "between 13 to 15 times more advertisements per hour than viewed by the average X user." X said the watchdog didn't provide any context regarding the "forced, inauthentic nature" of the advertisements it saw.

In a response to Media Matters' research, X CEO Linda Yaccarino said "not a single authentic user on X saw IBM's, Comcast's, or Oracle's ads next to the content in Media Matters' article." She added that "only two users saw Apple's ad next to the content, at least one of which was Media Matters," confirming that the organization did see the advertisements, even if it had to create the right conditions for them. After Yaccarino released her statement, Media Matters head Angelo Carusone retweeted several posts from seemingly authentic users showing ads for searches and tags such as "killjews" and "HeilHitler." We reached out to the organization about the lawsuit, and a spokesperson told Engadget: "This is a frivolous lawsuit meant to bully X's critics into silence. Media Matters stands behind its reporting and looks forward to winning in court."

Aside from X's lawsuit, Media Matters also has to grapple with an investigation by Ken Paxton, the Attorney General of Texas. Paxton said his office is looking into Media Matters, which he called "a radical anti-free speech" organization, for potential fraudulent activity. He said he's investigating the watchdog to "ensure that the public has not been deceived by the schemes of radical left-wing organizations who would like nothing more than to limit freedom by reducing participation in the public square."

The media watchdog had published its findings after X owner Elon Musk responded to a tweet that said Jews pushed "hatred against whites that they claim to want people to stop using against them." Musk wrote: "You have said the actual truth." Several big-name advertisers pulled their campaigns from the platform following the incidents, including IBM, Apple, Disney, Paramount and Comcast. Lionsgate specifically cited Musk's tweet as the reason for pulling its ads.

According to Fortune, Yaccarino held an all-hands meeting after X filed the lawsuit to confirm to staff members that some customers' advertisements are still on pause. When asked about what the best outcome for the lawsuit would be, the CEO said a win would validate that X was right.

"They have a long history of being an activist organization, to force a narrative and not allow people of the world to make their own decisions," she reportedly responded. "I think one of the main goals that underscores the dedication to truth and fairness and that is that we allow people a global Town Square, to seek out their own information and make their own decisions. So exposing Media Matters to train people’s rights to make their own decisions will be a validation that X was right, and this was an inauthentic manipulation."

Update, November 21, 2023, 12:14AM ET: Added information from Fortune's report about X's all-hands meeting. 

This article originally appeared on Engadget at https://www.engadget.com/x-lawsuit-accuses-media-matters-of-running-a-campaign-to-drive-advertisers-away-from-its-website-040022933.html?src=rss

Apple joins Meta and ByteDance in contesting the EU’s ‘gatekeeper’ designation

Apple has joined Meta and TikTok owner ByteDance in contesting their platforms’ definitions as part of the EU’s Digital Markets Act (DMA). The legislation allows regulators to designate dominant companies’ services or platforms as “gatekeepers,” or big and powerful enough to act as a bottleneck between businesses and customers, which it can then fine for prohibited behavior. It currently targets 22 gatekeeper services run by six Big Tech companies (Apple, Microsoft, Alphabet’s Google, Meta, Amazon and ByteDance’s TikTok). The law encourages consumer-friendly competition, preventing businesses from imposing unfair conditions on customers.

The EU Court of Justice (via Reuters) posted on X Friday about Apple’s formal objection, announcing that the iPhone maker had joined Meta and ByteDance in contesting its decisions. Although the complaint details aren’t public, Bloomberg News reported last week that Apple would challenge the gatekeeper designations of both the App Store and iMessage. The company said this week it would soon support RCS on iPhone, potentially removing one of the EU’s bones to pick with iMessage consumer lock-in. 

Microsoft and Google have reportedly accepted their DMA designations, while Meta and ByteDance contested theirs. Meta specifically questioned Messenger and Marketplace’s gatekeeper labels, seeking to clarify why they were included. (Meta didn’t challenge Facebook, Instagram and WhatsApp’s inclusion.) The company argued Marketplace is a consumer-to-consumer service and Messenger is a chat feature on Facebook, not an online intermediary.

Meanwhile, ByteDance argues that TikTok is a challenger in the social market rather than an established gatekeeper. It claimed designating its platform as such would only serve to protect more established companies.

Like the Digital Services Act (DSA), the DMA has significant teeth. Companies failing to comply can face fines of up to 10 percent of their global turnover, up to 20 percent for repeat offenders and periodic fines of up to five percent of their average daily turnover. Other penalties, including the divestiture of parts of a business, could also be included following market investigations.

This article originally appeared on Engadget at https://www.engadget.com/apple-joins-meta-and-bytedance-in-contesting-the-eus-gatekeeper-designation-165915809.html?src=rss

Google, Meta and other social media companies will be forced to defend teen addiction lawsuits

US District Judge Yvonne Gonzalez Rogers has ruled that the companies that own and run the most popular social networks today will have to face lawsuits blaming them for teenagers' social media addiction. According to Bloomberg Law, that means Google, which owns YouTube; Meta, which runs Facebook and Instagram; ByteDance, which owns TikTok; and Snap can't get out of hundreds of federal lawsuits filed against them over the past couple of years.

Rogers, who'll be overseeing the cases, disagreed with the companies' argument that they're not liable for personal injury claims under the First Amendment and Section 230 of the Communications Decency Act. Section 230 protects publishers from liability for what their users post on their platforms, but the judge said the lawsuits cover more than just third-party content. Further, she said the companies had failed to explain why they shouldn't be held responsible for other complaints against them, including defective parental controls, the failure to implement effective age verification systems and the barriers they add to the account deactivation process. At the same time, the judge dismissed some of the complaints, such as the ones suing the companies for failing to limit certain kinds of content.

The lawsuits in question were filed on behalf of minors across the country. In 2022, a mother from Connecticut sued Meta and Snap, accusing them of causing an addiction in her 11-year-old daughter, who took her own life. In October this year, Meta was sued by 41 states as well as the District of Columbia, accusing the company of knowing that its "addictive" features were harmful to children and teens. Companies like Meta have been facing increased scrutiny over the past couple of years after former employee Frances Haugen revealed internal Facebook research that found Instagram to be "harmful for a sizable percentage of teens."

Google spokesperson José Castañeda told Bloomberg Law that protecting children has always been core to the company's work. "In collaboration with child development specialists, we have built age-appropriate experiences for kids and families on YouTube, and provide parents with robust controls," he added. "The allegations in these complaints are simply not true." A TikTok spokesperson gave Reuters a similar statement and said the app has "robust safety policies and parental controls."

This article originally appeared on Engadget at https://www.engadget.com/google-meta-and-other-social-media-companies-will-be-forced-to-defend-teen-addiction-lawsuits-081727526.html?src=rss

Google sues scammers that allegedly released a malware-filled Bard knockoff

The hype surrounding emerging technologies like generative AI creates a wild west, of sorts, for bad actors seeking to capitalize on consumer confusion. To that end, Google is suing some scammers who allegedly tricked people into downloading an “unpublished” version of its Bard AI software. Instead of a helpful chatbot, this Bard was reportedly stuffed with malware.

The lawsuit, filed today in California, alleges that individuals based in Vietnam have been setting up social media pages and running ads encouraging users to download a version of Bard, one that doesn't deliver helpful answers on how to cook risotto or whatever. This Bard, once downloaded by some rube, worms its way into the system and steals passwords and social media credentials. The lawsuit notes that these scammers have specifically used Facebook as their preferred distribution method.

Google’s official blog post on the matter notes that it sent over 300 takedown requests before opting for the lawsuit. The suit doesn’t seek financial compensation, but rather an order to stop the alleged fraudsters from setting up similar domains, particularly with US-based domain registrars. The company says that this outcome will “serve as a deterrent and provide a clear mechanism for preventing similar scams in the future.”

The lawsuit goes on to highlight how emerging technologies are ripe for this kind of anti-consumer weaponization. In this case, the alleged scammers said that Bard is a paid service that required a download. In reality, it’s a free web service.

This article originally appeared on Engadget at https://www.engadget.com/google-sues-scammers-that-allegedly-released-a-malware-filled-bard-knockoff-162222150.html?src=rss

How the meandering legal definition of ‘fair use’ cost us Napster but gave us Spotify

The internet's "enshittification," as veteran journalist and privacy advocate Cory Doctorow describes it, began decades before TikTok made the scene. Elder millennials remember the good old days of Napster — followed by the much worse old days of Napster being sued into oblivion along with Grokster and the rest of the P2P sharing ecosystem, until we were left with a handful of label-approved, catalog-sterilized streaming platforms like Pandora and Spotify. Three cheers for corporate copyright litigation.

In his new book The Internet Con: How to Seize the Means of Computation, Doctorow examines the modern social media landscape, cataloging and illustrating the myriad failings and short-sighted business decisions of the Big Tech companies operating the services that promised us the future but just gave us more Nazis. We have both an obligation and responsibility to dismantle these systems, Doctorow argues, and a means to do so with greater interoperability. In this week's Hitting the Books excerpt, Doctorow examines the aftermath of the lawsuits against P2P sharing services, as well as the role that the Digital Millennium Copyright Act's "notice-and-takedown" reporting system and YouTube's "ContentID" scheme play on modern streaming sites.


Excerpted from The Internet Con: How to Seize the Means of Computation by Cory Doctorow. Published by Verso. Copyright © 2023 by Cory Doctorow. All rights reserved.


Seize the Means of Computation

The harms from notice-and-takedown itself don’t directly affect the big entertainment companies. But in 2007, the entertainment industry itself engineered a new, more potent form of notice-and-takedown that manages to inflict direct harm on Big Content, while amplifying the harms to the rest of us. 

That new system is “notice-and-stay-down,” a successor to notice-and-takedown that monitors everything every user uploads or types and checks to see whether it is similar to something that has been flagged as a copyrighted work. This has long been a legal goal of the entertainment industry, and in 2019 it became a feature of EU law, but back in 2007, notice-and-stay-down made its debut as a voluntary modification to YouTube, called “Content ID.” 

Some background: in 2007, Viacom (part of CBS) filed a billion-dollar copyright suit against YouTube, alleging that the company had encouraged its users to infringe on its programs by uploading them to YouTube. Google — which acquired YouTube in 2006 — defended itself by invoking the principles behind Betamax and notice-and-takedown, arguing that it had lived up to its legal obligations and that Betamax established that “inducement” to copyright infringement didn’t create liability for tech companies (recall that Sony had advertised the VCR as a means of violating copyright law by recording Hollywood movies and watching them at your friends’ houses, and the Supreme Court decided it didn’t matter). 

But with Grokster hanging over Google’s head, there was reason to believe that this defense might not fly. There was a real possibility that Viacom could sue YouTube out of existence — indeed, profanity-laced internal communications from Viacom — which Google extracted through the legal discovery process — showed that Viacom execs had been hotly debating which one of them would add YouTube to their private empire when Google was forced to sell YouTube to the company. 

Google squeaked out a victory, but was determined not to end up in a mess like the Viacom suit again. It created Content ID, an “audio fingerprinting” tool that was pitched as a way for rights holders to block, or monetize, the use of their copyrighted works by third parties. YouTube allowed large (at first) rightsholders to upload their catalogs to a blocklist, and then scanned all user uploads to check whether any of their audio matched a “claimed” clip. 

Once Content ID determined that a user was attempting to post a copyrighted work without permission from its rightsholder, it consulted a database to determine the rights holder’s preference. Some rights holders blocked any uploads containing audio that matched theirs; others opted to take the ad revenue generated by that video. 

There are lots of problems with this. Notably, there’s the inability of Content ID to determine whether a third party’s use of someone else’s copyright constitutes “fair use.” As discussed, fair use is the suite of uses that are permitted even if the rightsholder objects, such as taking excerpts for critical or transformational purposes. Fair use is a “fact intensive” doctrine—that is, the answer to “Is this fair use?” is almost always “It depends, let’s ask a judge.” 

Computers can’t sort fair use from infringement. There is no way they ever can. That means that filters block all kinds of legitimate creative work and other expressive speech — especially work that makes use of samples or quotations. 

But it’s not just creative borrowing, remixing and transformation that filters struggle with. A lot of creative work is similar to other creative work. For example, a six-note phrase from Katy Perry’s 2013 song “Dark Horse” is effectively identical to a six-note phrase in “Joyful Noise,” a 2008 song by a much less well-known Christian rapper called Flame. Flame and Perry went several rounds in the courts, with Flame accusing Perry of violating his copyright. Perry eventually prevailed, which is good news for her. 

But YouTube’s filters struggle to distinguish Perry’s six-note phrase from Flame’s (as do the executives at Warner Chappell, Perry’s publisher, who have periodically accused people who post snippets of Flame’s “Joyful Noise” of infringing on Perry’s “Dark Horse”). Even when the similarity isn’t as pronounced as in Dark, Joyful, Noisy Horse, filters routinely hallucinate copyright infringements where none exist — and this is by design. 

To understand why, first we have to think about filters as a security measure — that is, as a measure taken by one group of people (platforms and rightsholder groups) who want to stop another group of people (uploaders) from doing something they want to do (upload infringing material). 

It’s pretty trivial to write a filter that blocks exact matches: the labels could upload losslessly encoded pristine digital masters of everything in their catalog, and any user who uploaded a track that was digitally or acoustically identical to that master would be blocked. 

But it would be easy for an uploader to get around a filter like this: they could just compress the audio ever-so-slightly, below the threshold of human perception, and this new file would no longer match. Or they could cut a hundredth of a second off the beginning or end of the track, or omit a single bar from the bridge, or any of a million other modifications that listeners are unlikely to notice or complain about. 

Filters don’t operate on exact matches: instead, they employ “fuzzy” matching. They don’t just block the things that rights holders have told them to block — they block stuff that’s similar to those things that rights holders have claimed. This fuzziness can be adjusted: the system can be made more or less strict about what it considers to be a match. 

Rightsholder groups want the matches to be as loose as possible, because somewhere out there, there might be someone who’d be happy with a very fuzzy, truncated version of a song, and they want to stop that person from getting the song for free. The looser the matching, the more false positives. This is an especial problem for classical musicians: their performances of Bach, Beethoven and Mozart inevitably sound an awful lot like the recordings that Sony Music (the world’s largest classical music label) has claimed in Content ID. As a result, it has become nearly impossible to earn a living off of online classical performance: your videos are either blocked, or the ad revenue they generate is shunted to Sony. Even teaching classical music performance has become a minefield, as painstakingly produced, free online lessons are blocked by Content ID or, if the label is feeling generous, the lessons are left online but the ad revenue they earn is shunted to a giant corporation, stealing the creative wages of a music teacher.

Notice-and-takedown law didn’t give rights holders the internet they wanted. What kind of internet was that? Well, though entertainment giants said all they wanted was an internet free from copyright infringement, their actions — and the candid memos released in the Viacom case — make it clear that blocking infringement is a pretext for an internet where the entertainment companies get to decide who can make a new technology and how it will function.

This article originally appeared on Engadget at https://www.engadget.com/hitting-the-books-the-internet-con-cory-doctorow-verso-153018432.html?src=rss

FTX founder Sam Bankman-Fried found guilty of fraud, faces up to 110 years in prison

A federal jury has found FTX founder Sam Bankman-Fried guilty on all seven counts of fraud and conspiracy, which he was charged with following the downfall of his cryptocurrency exchange. According to The New York Times, he faces a maximum sentence of 110 years in federal prison. SBF, as he's now infamously known, was arrested in the Bahamas back in December 2022 after the Department of Justice took a close look at his role in the rapid collapse of FTX. The agency examined whether he transferred hundreds of millions of dollars when the exchange filed for bankruptcy. (The company claimed it was hacked after around $600 million disappeared from its funds.) The DoJ also investigated whether FTX broke the law when it moved funds to its sister company, Alameda Research.

During SBF's trial, which took place over the past month, prosecutors argued that he used FTX funds to keep Alameda Research running. The fallen entrepreneur also founded the cryptocurrency hedge fund, which was run by his girlfriend Caroline Ellison, who was aware that he used FTX customers' money to help Alameda meet its liabilities. Bankman-Fried previously denied that he deliberately misused FTX's funds. 

The Times says his lawyers tried to portray him as a math nerd who had to grapple with "forces largely outside of his control," but the jury clearly disagreed after the prosecution called Ellison and three of Bankman-Fried's former top advisers to the witness stand. Ellison and all of those advisers had pleaded guilty, with the Alameda Research chief admitting that she committed fraud at Bankman-Fried's direction. The FTX founder himself took the stand and said that he "deeply regret not taking a deeper look into" the $8 billion his hedge fund had borrowed from the cryptocurrency exchange. 

Bankman-Fried was charged with committing wire fraud against FTX customers; wire fraud on Alameda Research lenders; conspiracy to commit wire fraud against both; conspiracy to commit securities and commodities fraud on FTX customers; as well as conspiracy to commit money laundering. He is scheduled to be sentenced on March 28, 2024 by US District Judge Lewis A. Kaplan, who also presided over the FTX trial. 

This article originally appeared on Engadget at https://www.engadget.com/ftx-founder-sam-bankman-fried-found-guilty-on-seven-charges-of-fraud-and-conspiracy-012316105.html?src=rss

Unredacted documents in the FTC’s Amazon lawsuit shed light on the company’s secret price-gouging algorithm

It looks like Amazon is hellbent on keeping its spot as the biggest online retailer — even if that means hurting both sellers and customers. In September, the FTC filed a long-expected antitrust lawsuit against Amazon over its alleged use of illegal strategies to stay on top. Details of the suit were previously withheld from the public, but today a mostly unredacted version was released, including details about Amazon's secret pricing tool, known as Project Nessie. The algorithm helped Amazon increase prices by over $1 billion across two years, the FTC alleges.

Amazon argues that its dominance of the online retail space has helped small businesses reach more consumers. The FTC, however, contends that over the years Amazon's approach has become exploitative. The company continues to raise third-party seller fees, which are taking a toll on smaller businesses and even driving some into bankruptcy. Amazon previously called these claims baseless, but the documents revealed today suggest otherwise.

According to The Wall Street Journal, the internal documents cited in the original complaint show that Amazon executives were well aware of the effects of the company's policies. In the documents, Amazon executives acknowledged that these policies, which included requiring Amazon sellers to have the lowest prices online or risk consequences, had a “punitive aspect.” One executive pointed out that many sellers “live in constant fear” of being penalized by Amazon for not following the ever-changing pricing policy.

The FTC also alleges that the company had been monitoring its sellers and punishing them if they offered lower prices on other platforms, which the agency says is a violation of antitrust laws. The unredacted documents indicate that Amazon increased prices by over $1 billion between 2016 and 2018 with the use of a secret price-gouging algorithm known as Project Nessie. It was also revealed that the "take rate" — aka the amount Amazon makes from sellers who use the Fulfillment By Amazon logistics program — increased from 27.6 percent in 2014 to 39.5 percent in 2018. It's unclear if that has changed in more recent years since those numbers remained redacted.

And Amazon isn't just ruining its sellers’ experience. The complaint also revealed Amazon's increased use of ads in search results. Several ad executives at the company acknowledged that these sponsored ads were often irrelevant to the initial search and caused “harm to consumers" and the overall experience on the site.

The FTC alleges that these policies were the brainchild of Jeff Bezos, Amazon’s founder and former chief executive, to increase the company's profit margins.

“Mr. Bezos directly ordered his advertising team to continue to increase the number of advertisements on Amazon by allowing more irrelevant advertisements, because the revenue generated by advertisements eclipsed the revenue lost by degrading consumers’ shopping experience,” the FTC complaint alleges.

This article originally appeared on Engadget at https://www.engadget.com/amazon-ftc-lawsuit-unredacted-documents-project-nessie-secret-price-gouging-algorithm-194800531.html?src=rss

Uber and Lyft must pay $328 million to New York drivers in massive wage theft settlement

Uber and Lyft have agreed to pay a combined $328 million in settlements following a wage theft investigation by the New York attorney general’s office. According to New York AG Letitia James, the companies’ policies “systematically cheated their drivers out of hundreds of millions of dollars in pay and benefits.” They’ll both now have to pay settlement funds to more than 100,000 current and former drivers in New York, and offer both minimum hourly pay rates and paid sick leave.

In the two settlements, Uber has to pay $290 million, while Lyft must pay $38 million. The AG’s office found both Uber and Lyft shortchanged drivers by deducting sales taxes from drivers’ commissions that should have been paid by riders between 2014 and 2017. They also did not offer paid sick leave. As a result of the settlement, drivers outside of New York City will be guaranteed an earnings floor of $26 per hour (NYC drivers already have minimum rates under Taxi & Limousine Commission regulations), and will earn one hour of sick pay for every 30 hours worked. This will be capped at 56 hours per year.

NYC drivers will get $17 per hour for sick leave, while drivers outside of the city will get $26 per hour. Both rates will be adjusted annually for inflation. Drivers can put in a claim for their share of the settlement on the New York Attorney General’s website. The companies will also be required to update their apps to improve the process for putting in sick leave requests and provide support for pay-related questions, plus earnings statements for drivers which explain their compensation in detail.

Uber separately settled with the Department of Labor today as well following two lawsuits over its failure to provide unemployment benefits for drivers. The company will now have to make quarterly payments into the New York State Unemployment Insurance Trust Fund to cover its drivers, and pay an as yet undisclosed amount in retroactive payments going back to 2013.

The New York Taxi Workers Alliance has sued multiple times seeking unemployment benefits for drivers, as the fight over whether they should be considered employees or independent contractors continues. “Today's settlement is a victory for Uber drivers across the state who will no longer be denied timely access to life-saving benefits by Uber in their darkest hour, and New York taxpayers will no longer have to subsidize the billionaires at Uber and Lyft,” the NYTWA and Legal Services NYC said in a statement about the settlement. “Drivers for the state's largest employer will now be able to access unemployment benefits moving forward without endless obstacles and denials.”

New York has been cracking down on app-based service providers in recent years amid a push by the Biden administration to see gig workers classified as employees. A California court, however, ruled against one such effort in March, allowing companies to continue classifying their drivers as contractors. But NY has made progress recently in securing more protections. In September, Uber, GrubHub and DoorDash were told they must pay their delivery workers a minimum wage.

Update, November 2 2023, 3:10PM ET: This story has been updated to include information on a second settlement Uber reached today with the New York Department of Labor, and a statement from the New York Taxi Workers Alliance.

This article originally appeared on Engadget at https://www.engadget.com/uber-and-lyft-must-pay-328-million-to-new-york-drivers-in-massive-wage-theft-settlement-155716817.html?src=rss

Scarlett Johansson takes legal action against AI app that cloned her likeness

Oscar-nominated actor Scarlett Johansson has taken legal action against an AI app developer for using her likeness in an ad without permission, Variety has reported. The 22-second ad promoted an AI image editor called Lisa AI: 90s Yearbook & Avatar, and reportedly used an AI-generated version of Johansson's voice and image.

The ad opened with a real behind-the-scenes clip of Johansson from Black Widow, in which she says "What's up guys? It's Scarlett and I want you to come with me...". It then transitions to AI-generated photos and a cloned version of her voice promoting the AI app. Under the ad is fine print that states: "Images produced by Lisa AI. It has nothing to do with this person." Multiple Lisa AI apps created by Convert Software remain on the App Store and Google Play, according to Variety, but the ad no longer appears on X. 

Johansson is "handling the situation in a legal capacity," said her lawyer Kevin Yorn. "We do not take these things lightly. Per our usual course of action in these circumstances, we will deal with it with all legal remedies that we will have," he added. 

Johansson has one of the best known faces (and voices) in Hollywood and is the spokesperson for high-end companies including Dolce & Gabbana and Louis Vuitton. Given that, it's hard to believe that someone would even attempt to rip off her likeness, if the claim is accurate (and it's not exactly a ringing endorsement for the quality of ads on X). 

The idea of using AI to rip off celebrity likenesses is a relatively new phenomenon, so the legal ramifications are still being worked out. In one notable incident, actor Tom Hanks warned his fans on social media that videos using AI versions of his likeness were being used to fraudulently hawk products.

Though it's still a legal grey area, some states have related laws around privacy rights, with California for one allowing civil lawsuits for the unauthorized use in advertising or promotion of someone’s "name, voice, signature, photograph or likeness." 

This article originally appeared on Engadget at https://www.engadget.com/scarlett-johannson-takes-legal-action-against-ai-app-that-cloned-her-likeness-065505106.html?src=rss

Google and Match Group settle antitrust case before it goes to trial

The antitrust lawsuit Epic Games and Match Group filed against Google was supposed to go to trial on November 6, but now it looks like the video game developer might go it alone. Google and Match, the parent company of Tinder, OkCupid and Hinge, have reached an agreement to drop all claims against each other. According to Bloomberg and The Wall Street Journal, Google has agreed to return the $40 million Match had placed in escrow to cover the service fees it would supposedly owe the Alphabet unit while the dispute was ongoing.

Match also announced in its earnings report that its apps will be using Google's User Choice Billing program starting on March 31, 2024. Under the program, users will have the option to choose between Google's and the developer's billing systems when purchasing an app or paying for a subscription. If they choose to use Google's system, then Match will have to pay Google 15 percent for recurring subscriptions and 30 percent for one-off payments. Google's cut is reduced to 11 percent and 26 percent, respectively, for payments that go through the developer's provided alternative. The dating services provider said that the terms they agreed on will offset the additional costs its apps will incur implementing the User Choice Billing program over three years starting in 2024.

Tinder's parent company originally sued Google in 2022, accusing it of violating federal and state antitrust laws. Match said that Google previously assured it that it could use its own payment system. However, when it announced a new policy that would require all Android developers to process payments through the Play Store billing system, Google allegedly threatened to remove its apps from the store if it didn't comply. Match also claimed that the company had been rejecting app updates that maintained the payment system it was using.

Later that year, Match joined up with Epic Games, and the two consolidated their antitrust lawsuits against their common foe. They even expanded their allegations, accusing Google of paying major developers hundreds of millions of dollars to keep their apps in the Play Store. Bloomberg says Epic is now scheduled to face Google in court alone on November 2, and the judge is waiting for both parties to decide whether they want a jury to decide their case. Epic had also sued Apple over the same issue, but in Google's case, the court will have to take into account that Android users can sideload applications onto their devices. The video game developer hasn't dropped any hints that it's also hashing out an agreement with the bigger company, but we'll know for sure if the trial goes ahead on November 2.

This article originally appeared on Engadget at https://www.engadget.com/google-and-match-group-settle-antitrust-case-before-it-goes-to-trial-041158809.html?src=rss