Meta’s controversial ad-free subscription is facing scrutiny from EU privacy campaigners

In a bid to comply with updated privacy rules in Europe, Meta recently gave Facebook and Instagram users in the region an ultimatum. They either had to agree to receive targeted ads or sign up for a €10 per month subscription for each app (or stop using them altogether). That would give users the choice of opting out of ad tracking, but they'd have to pay a hefty sum to do so. 

Now, an Austrian privacy group called noyb has filed a complaint against Meta's actions on behalf of a client in financial distress. The group stated that the subscription price is out of proportion to the value Facebook receives from ads, so it's effectively a false choice for users without the means to pay for a subscription. 

"More than 20% of the EU population are already at risk of poverty," wrote noyb founder and noted EU privacy advocate Max Schrems. "For the complainant in our case, as for many others, a ‘Pay or Okay’ system would mean paying the rent or having privacy." 

Citing Meta's own data, noyb said that the company's average revenue per user in Europe was $16.79 per quarter between Q3 2022 and Q3 2023, or about €62.88 per user per year. However, Meta plans to charge a minimum of €120 per year (more if you sign up on a smartphone), or up to €251.88 ($275.88) to have both Instagram and Facebook ad-free. 
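noyb's comparison can be checked with a few lines of arithmetic. This is a minimal sketch; the EUR/USD rate is an assumed value chosen to be consistent with the article's conversion, and the other figures come from the complaint as cited above:

```python
# Rough sketch of noyb's arithmetic using the figures cited above.
# USD_PER_EUR is an assumption chosen to match the article's conversion.
USD_PER_EUR = 1.068  # assumed exchange rate

arpu_usd_quarter = 16.79                  # Meta's reported ARPU in Europe, per quarter
arpu_eur_year = arpu_usd_quarter * 4 / USD_PER_EUR

sub_eur_min = 10 * 12                     # one app at €10/month
sub_eur_max = 251.88                      # both apps, smartphone sign-up pricing

print(f"annual ad revenue per user: ~€{arpu_eur_year:.2f}")
print(f"subscription range: €{sub_eur_min:.2f} to €{sub_eur_max:.2f}")
print(f"max fee vs. ad revenue: {sub_eur_max / arpu_eur_year:.1f}x")
```

On these numbers, opting out of tracking on both apps would cost roughly four times what Meta earns from an average European user's ads in a year, which is the disproportion noyb's complaint turns on.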

noyb notes that only 3 to 10 percent of users want personalized ads, yet 99.9 percent consent due to the lack of a true choice. "EU law requires that consent is the genuine free will of the user. Contrary to this law, Meta charges a 'privacy fee' of up to €250 per year if anyone dares to exercise their fundamental right to data protection," said noyb's data protection lawyer Felix Mikolasch. 

Meta's actions are also likely to set off a "domino effect," according to noyb. "Already now, TikTok is reportedly testing an ad-free subscription outside the US. Other app providers could follow in the near future, making online privacy unaffordable." It added that if multiple apps took the same approach, data privacy would be available "only for the rich." 

Meta defended its approach, saying it follows EU laws. "The option for people to purchase a subscription for no ads balances the requirements of European regulators while giving users choice and allowing Meta to continue serving all people in the EU, EEA and Switzerland. In its ruling, the CJEU expressly recognized that a subscription model, like the one we are announcing, is a valid form of consent for an ads funded service," a spokesperson told TechCrunch, referring to a post from last month.

However, European courts have stated that any fee charged to avoid tracking must be "necessary" and "appropriate," and that consent must be freely given. noyb appears to be targeting those requirements by arguing that the relatively high fees will effectively deter free choice by EU citizens, particularly those in financial difficulty. 

"Fundamental rights are usually available to everyone. How many people would still exercise their right to vote if they had to pay €250 to do so? There were times when fundamental rights were reserved for the rich. It seems Meta wants to take us back more than a hundred years," Schrems said. 

This article originally appeared on Engadget.

Unsealed complaint says Meta ‘coveted’ under-13s and deceives the public about age enforcement

An unsealed complaint in a lawsuit filed against Meta by 33 states alleges the company is not only aware that children under the age of 13 use its platforms, but has also “coveted and pursued” this demographic for years on Instagram. The document, which was first spotted by The New York Times, claims that Meta has long been dishonest about how it handles underage users’ accounts when they’re discovered, often failing to disable them when reported and continuing to harvest their data.

The newly unsealed complaint, filed on Wednesday, reveals arguments that were previously redacted when attorneys general from across the US first hit Meta with the lawsuit last month in California federal court. It alleges the presence of under-13s is an "open secret" at Meta. While the policies on Facebook and Instagram state a person must be at least 13 years old to sign up, children can easily lie about their age, something the lawsuit says Meta is well aware of and has done little to stop. Instead, when Meta "received over 1.1 million reports of under-13 users on Instagram" from 2019 to 2023, it "disabled only a fraction of those accounts and routinely continued to collect children's data without parental consent," the complaint says.

Meta “routinely violates” the Children’s Online Privacy Protection Act of 1998 (COPPA) by targeting children and collecting their information without parental consent, according to the complaint. The lawsuit also argues that Meta’s platforms manipulate young users into spending unhealthy amounts of time on the apps, promote body dysmorphia and expose them to potentially harmful content. When the lawsuit was first filed in October, a Meta spokesperson said the company was “disappointed” over the chosen course of action, stating, “We share the attorneys general’s commitment to providing teens with safe, positive experiences online.”

Meta earlier this month published a blog post calling for federal legislation to put more responsibility on parents when it comes to kids’ app downloads. Meta's global head of safety, Antigone Davis, proposed a requirement for parents to have approval power over downloads for kids under the age of 16.

NVIDIA sued for stealing trade secrets after screensharing blunder showed rival company’s code

NVIDIA is facing a lawsuit filed by French automotive company Valeo after a screensharing blunder by one of its employees. According to Valeo's complaint, Mohammad Moniruzzaman, an NVIDIA engineer who used to work for Valeo, mistakenly showed Valeo's source code files on his computer while sharing his screen during a 2022 meeting between the two firms. Valeo's employees quickly recognized the code and took screenshots before Moniruzzaman was notified of his mistake. 

Valeo and NVIDIA are working together on advanced parking and driving assistance technology that a manufacturer offers to its customers. Valeo used to be in charge of both the software and hardware sides of the manufacturer's parking assistance tech. In 2021, however, NVIDIA won the contract to develop its parking assistance software. Valeo wrote in its lawsuit that its former employee, who helped it develop its parking and driving assistance systems, had realized that his exposure and access to its proprietary technologies would make him "exceedingly valuable" to NVIDIA. 

Moniruzzaman allegedly gave his personal email unauthorized access to Valeo's systems to steal "tens of thousands of files" and 6GB of source code shortly after that development. He then left Valeo a few months later and took the stolen information with him when he was given a senior position at NVIDIA, the complaint reads. He also worked on the very same project he was involved in for Valeo, which is why he was present at that video conference. 

Valeo said its former employee admitted to stealing its software and that German police found its documentation and hardware pinned on Moniruzzaman's walls when his home was raided. According to Bloomberg, he was already convicted of infringement of business secrets in a German court and was ordered to pay €14,400 ($15,750) in September. 

In a letter dated June 2022, NVIDIA's lawyers told the plaintiff's counsel that the company "has no interest in Valeo's code or its alleged trade secrets and has taken prompt concrete steps to protect [its] client’s asserted rights." Valeo still sued the company earlier this month, however, and said that NVIDIA has "saved millions, perhaps hundreds of millions, of dollars in development costs, and generated profits that it did not properly earn and to which it was not entitled" by stealing its trade secrets. 

This is yet another sign that competition continues to heat up in the autonomous driving market. Back in 2017, Waymo accused Uber of colluding with its former employee, Anthony Levandowski, to steal over 14,000 confidential and proprietary design files. Levandowski was sentenced to 18 months in prison, but he was pardoned six months later by then-President Donald Trump. 

OpenAI and Microsoft hit with copyright lawsuit from non-fiction authors

OpenAI has been hit with another lawsuit, accusing it of using other people's intellectual property without permission to train its generative AI technology. Only this time, the lawsuit also names Microsoft as a defendant. The complaint was filed by Julian Sancton on behalf of a group of non-fiction authors who said they were not compensated for the use of their books and academic journals in training the company's large language model. 

In their lawsuit, the authors describe how they spent years "conceiving, researching, and writing their creations." They accuse OpenAI and Microsoft of refusing to pay authors while building a business "valued into the tens of billions of dollars by taking the combined works of humanity without permission." The companies pretend copyright laws do not exist, the complaint reads, and have "enjoyed enormous financial gain from their exploitation of copyrighted material."

Sancton is the author behind Madhouse at the End of the Earth: The Belgica’s Journey Into the Dark Antarctic, which tells the true survival story of an 1897 polar expedition that got stuck in the ocean in the middle of a sunless Antarctic winter. Sancton spent five years and tens of thousands of dollars to research and write the book. "Such an investment of time and money is feasible for Plaintiff Sancton and other writers because, in exchange for their creative efforts, the Copyright Act grants them 'a bundle of exclusive rights' in their works, including 'the rights to reproduce the copyrighted work[s],'" according to the lawsuit. 

As Forbes notes, OpenAI previously said that content generated by ChatGPT doesn't constitute "derivative work" and, hence, doesn't infringe on any copyright. Sancton's lawsuit is merely the latest complaint against the company over its use of copyrighted work to train its technology. Earlier this year, screenwriter and author Michael Chabon sued OpenAI for the same thing, as did George R.R. Martin, John Grisham and Jodi Picoult. Comedian Sarah Silverman filed a lawsuit against OpenAI and Meta as well. Sancton is now seeking damages and injunctive relief on behalf of the proposed class of authors. 

Binance founder Changpeng Zhao steps down as CEO, will plead guilty to federal charges

Binance CEO Changpeng Zhao is set to plead guilty to federal money laundering charges and step down from his position at the company he founded. Zhao and the cryptocurrency exchange have reached a plea deal with the government, which conducted a multi-year investigation into the company, CNBC reports. As part of the settlement, Binance will forfeit $2.5 billion and pay a $1.8 billion fine. Zhao is slated to personally pay $50 million.

Zhao will be prohibited from having any involvement with Binance for three years. As part of the plea deal, Zhao will plead guilty later on Tuesday to violating and causing a financial institution to violate the Bank Secrecy Act, according to Reuters.

Binance, Zhao and others were accused of failing to institute an effective anti-money laundering program. According to the Justice Department, they willfully violated economic sanctions “in a deliberate and calculated effort to profit from the US market without implementing controls required by US law." Court documents state that the lack of anti-money laundering measures led to Binance facilitating almost $900 million in financial transactions in violation of sanctions against Iran between 2018 and 2022.

In a statement, Zhao confirmed he is stepping down as CEO, with the company's former global head of regional markets Richard Teng taking over the top job. "Today, I stepped down as CEO of Binance," Zhao wrote on X. "Admittedly, it was not easy to let go emotionally. But I know it is the right thing to do. I made mistakes, and I must take responsibility. This is best for our community, for Binance, and for myself." 

Zhao now plans to take a break before perhaps getting more involved in investing. However, "I can’t see myself being a CEO driving a startup again. I am content being a one-shot (lucky) entrepreneur."

The settlement resolves criminal charges related to breaching sanctions regulations, conspiracy and conducting an unlicensed money transmitter business. Meanwhile, former compliance chief Samuel Lim will reportedly face charges as part of the deal.

This is a major settlement between the company and agencies such as the Commodity Futures Trading Commission (CFTC) and the Treasury Department. The CFTC charged Binance, Zhao and Lim with violating its rules, as well as the Commodity Exchange Act, earlier this year.

“Binance turned a blind eye to its legal obligations in the pursuit of profit. Its willful failures allowed money to flow to terrorists, cybercriminals, and child abusers through its platform,” Treasury Secretary Janet Yellen said in a statement. “Today’s historic penalties and monitorship to ensure compliance with US law and regulations mark a milestone for the virtual currency industry. Any institution, wherever located, that wants to reap the benefits of the US financial system must also play by the rules that keep us all safe from terrorists, foreign adversaries, and crime, or face the consequences.”

Binance will remain in operation, albeit under stricter rules. It will need to ensure it abides by anti-money laundering regulations by beefing up its compliance program. The company will also have to appoint an independent compliance monitor.

In June, the Securities and Exchange Commission sued Binance and Zhao, alleging that they helped US traders bypass restrictions and violated securities laws by, among other things, mishandling funds. The SEC also claimed that (in similar allegations to those laid against rival exchange FTX) Binance commingled billions of dollars of customer money with the company's own funds. The SEC charges were not resolved in this settlement.

X lawsuit accuses Media Matters of running a campaign to drive advertisers away

X has filed a lawsuit against media watchdog group Media Matters over the latter's research that showed ads on the social network appearing next to antisemitic content. The company's owner, Elon Musk, promised to file a "thermonuclear lawsuit" against the organization late last week following an advertiser exodus. In its complaint, X said Media Matters "knowingly and maliciously manufactured side-by-side images depicting advertisers' posts on X Corp.'s social media platform beside Neo-Nazi and white nationalist fringe content." It added that the group portrayed the "manufactured images" as if they represented the typical user's experience on the platform. "Media Matters designed both these images and the resulting media strategy to drive advertisers from the platform and destroy X Corp," the company wrote. 

As TechCrunch notes, though, Media Matters didn't exactly "manufacture" the images it used in its research. According to X's own investigation, as detailed in its lawsuit, the organization used an account older than 30 days to bypass the website's ad filters and followed a set of users known to produce "extreme, fringe content" along with the biggest advertisers on the platform. The group then allegedly kept scrolling and refreshing its feed to generate "between 13 to 15 times more advertisements per hour than viewed by the average X user." X said the watchdog didn't provide any context regarding the "forced, inauthentic nature" of the advertisements it saw. 

In a response to Media Matters' research, X CEO Linda Yaccarino said "not a single authentic user on X saw IBM's, Comcast's, or Oracle's ads next to the content in Media Matters' article." She added that "only two users saw Apple's ad next to the content, at least one of which was Media Matters," confirming that the organization did see the advertisements, even if it had to create the right conditions for them. After Yaccarino released her statement, Media Matters head Angelo Carusone retweeted several posts from seemingly authentic users showing ads for searches and tags such as "killjews" and "HeilHitler." We reached out to the organization about the lawsuit, and a spokesperson told Engadget: "This is a frivolous lawsuit meant to bully X's critics into silence. Media Matters stands behind its reporting and looks forward to winning in court."

Aside from X's lawsuit, Media Matters also has to grapple with an investigation by Ken Paxton, the Attorney General of Texas. Paxton said his office is looking into Media Matters, which he called "a radical anti-free speech" organization, for potential fraudulent activity. He said he's investigating the watchdog to "ensure that the public has not been deceived by the schemes of radical left-wing organizations who would like nothing more than to limit freedom by reducing participation in the public square."

The media watchdog had published its findings after X owner Elon Musk responded to a tweet that said Jews pushed "hatred against whites that they claim to want people to stop using against them." Musk wrote: "You have said the actual truth." Several big-name advertisers had pulled their campaigns from the platform following the incidents, including IBM, Apple, Disney, Paramount and Comcast. Meanwhile, Lionsgate specifically cited Elon's tweet as the reason for pulling its ads. 

According to Fortune, Yaccarino held an all-hands meeting after X filed the lawsuit to confirm to staff members that some customers' advertisements are still on pause. When asked about what the best outcome for the lawsuit would be, the CEO said a win would validate that X was right.

"They have a long history of being an activist organization, to force a narrative and not allow people of the world to make their own decisions," she reportedly responded. "I think one of the main goals that underscores the dedication to truth and fairness and that is that we allow people a global Town Square, to seek out their own information and make their own decisions. So exposing Media Matters to train people’s rights to make their own decisions will be a validation that X was right, and this was an inauthentic manipulation."

Update, November 21, 2023, 12:14AM ET: Added information from Fortune's report about X's all-hands meeting. 

Apple joins Meta and ByteDance in contesting the EU’s ‘gatekeeper’ designation

Apple has joined Meta and TikTok owner ByteDance in contesting their platforms’ designations under the EU’s Digital Markets Act (DMA). The legislation allows regulators to designate dominant companies’ services or platforms as “gatekeepers,” or big and powerful enough to act as a bottleneck between businesses and customers, which they can then fine for prohibited behavior. It currently targets 22 gatekeeper services run by six Big Tech companies (Apple, Microsoft, Alphabet’s Google, Meta, Amazon and ByteDance’s TikTok). The law encourages consumer-friendly competition, preventing businesses from imposing unfair conditions on customers.

The EU Court of Justice (via Reuters) posted on X Friday about Apple’s formal objection, announcing that the iPhone maker had joined Meta and ByteDance in contesting its decisions. Although the complaint details aren’t public, Bloomberg News reported last week that Apple would challenge the gatekeeper designations of both the App Store and iMessage. The company said this week it would soon support RCS on iPhone, potentially removing one of the EU’s bones to pick with iMessage consumer lock-in. 

Microsoft and Google have reportedly accepted their DMA designations, while Meta and ByteDance contested theirs. Meta specifically questioned Messenger and Marketplace’s gatekeeper labels, seeking to clarify why they were included. (Meta didn’t challenge Facebook, Instagram and WhatsApp’s inclusion.) The company argued Marketplace is a consumer-to-consumer service and Messenger is a chat feature on Facebook, not an online intermediary.

Meanwhile, ByteDance argues that TikTok is a challenger in the social market rather than an established gatekeeper. It claimed designating its platform as such would only serve to protect more established companies.

Like the Digital Services Act (DSA), the DMA has significant teeth. Companies failing to comply can face fines of up to 10 percent of their global turnover, up to 20 percent for repeat offenders and periodic fines of up to five percent of their average daily turnover. Other penalties, including the divestiture of parts of a business, could also be included following market investigations.

Google, Meta and other social media companies will be forced to defend teen addiction lawsuits

US District Judge Yvonne Gonzalez Rogers has ruled that the companies that own and run the most popular social networks today will have to face lawsuits blaming them for teenagers' social media addiction. According to Bloomberg Law, that means Google, which owns YouTube, Meta, which runs Facebook and Instagram, ByteDance, which owns TikTok, and Snap can't get out of hundreds of federal lawsuits filed against them over the past couple of years. 

Rogers, who'll be overseeing the cases, disagreed with the companies' argument that they're not liable for personal injury claims under the First Amendment and Section 230 of the Communications Decency Act. Section 230 shields platforms from liability for what their users post, but the judge said the lawsuits cover more than just third-party content. Further, she said the companies had failed to explain why they shouldn't be held responsible for other complaints against them, including defective parental controls, the failure to implement effective age verification systems and the barriers added to the account deactivation process. At the same time, the judge dismissed some of the complaints, such as the ones suing the companies for failing to limit certain kinds of content. 

The lawsuits in question were filed on behalf of minors across the country. In 2022, a mother from Connecticut sued Meta and Snap, accusing them of causing an addiction in her 11-year-old daughter who took her own life. In October this year, Meta was sued by 41 states as well as the District of Columbia, accusing the company of knowing that its "addictive" features were harmful to children and teens. Companies like Meta have been facing increased scrutiny over the past couple of years after former employee Frances Haugen revealed internal Facebook research that found Instagram to be "harmful for a sizable percentage of teens." 

Google spokesperson José Castañeda told Bloomberg Law that protecting children has always been core to the company's work. "In collaboration with child development specialists, we have built age-appropriate experiences for kids and families on YouTube, and provide parents with robust controls," he added. "The allegations in these complaints are simply not true." A TikTok spokesperson gave Reuters a similar statement and said the app has "robust safety policies and parental controls."

Google sues scammers that allegedly released a malware-filled Bard knockoff

The hype surrounding emerging technologies like generative AI creates a wild west, of sorts, for bad actors seeking to capitalize on consumer confusion. To that end, Google is suing some scammers who allegedly tricked people into downloading an “unpublished” version of its Bard AI software. Instead of a helpful chatbot, this Bard was reportedly stuffed with malware.

The lawsuit was filed today in California and it alleges that individuals based in Vietnam have been setting up social media pages and running ads encouraging users to download a version of Bard, but this version doesn’t deliver helpful answers on how to cook risotto or whatever. This Bard, once downloaded by some rube, worms its way into the system and steals passwords and social media credentials. The lawsuit notes that these scammers have specifically used Facebook as their preferred distribution method.

Google’s official blog post on the matter notes that it sent over 300 takedown requests before opting for the lawsuit. The suit doesn’t seek financial compensation, but rather an order to stop the alleged fraudsters from setting up similar domains, particularly with US-based domain registrars. The company says that this outcome will “serve as a deterrent and provide a clear mechanism for preventing similar scams in the future.”

The lawsuit goes on to highlight how emerging technologies are ripe for this kind of anti-consumer weaponization. In this case, the alleged scammers said that Bard is a paid service that required a download. In reality, it’s a free web service.

How the meandering legal definition of ‘fair use’ cost us Napster but gave us Spotify

The internet's "enshittification," as veteran journalist and privacy advocate Cory Doctorow describes it, began decades before TikTok made the scene. Elder millennials remember the good old days of Napster — followed by the much worse old days of Napster being sued into oblivion along with Grokster and the rest of the P2P sharing ecosystem, until we were left with a handful of label-approved, catalog-sterilized streaming platforms like Pandora and Spotify. Three cheers for corporate copyright litigation.

In his new book The Internet Con: How to Seize the Means of Computation, Doctorow examines the modern social media landscape, cataloging and illustrating the myriad failings and short-sighted business decisions of the Big Tech companies operating the services that promised us the future but just gave us more Nazis. We have both an obligation and responsibility to dismantle these systems, Doctorow argues, and a means to do so with greater interoperability. In this week's Hitting the Books excerpt, Doctorow examines the aftermath of the lawsuits against P2P sharing services, as well as the role that the Digital Millennium Copyright Act's "notice-and-takedown" reporting system and YouTube's "ContentID" scheme play on modern streaming sites.

The Internet Con cover
Verso Publishing

Excerpted from The Internet Con: How to Seize the Means of Computation by Cory Doctorow. Published by Verso. Copyright © 2023 by Cory Doctorow. All rights reserved.

Seize the Means of Computation

The harms from notice-and-takedown itself don’t directly affect the big entertainment companies. But in 2007, the entertainment industry itself engineered a new, more potent form of notice-and-takedown that manages to inflict direct harm on Big Content, while amplifying the harms to the rest of us. 

That new system is “notice-and-stay-down,” a successor to notice-and-takedown that monitors everything every user uploads or types and checks to see whether it is similar to something that has been flagged as a copyrighted work. This has long been a legal goal of the entertainment industry, and in 2019 it became a feature of EU law, but back in 2007, notice-and-stay-down made its debut as a voluntary modification to YouTube, called “Content ID.” 

Some background: in 2007, Viacom (part of CBS) filed a billion-dollar copyright suit against YouTube, alleging that the company had encouraged its users to infringe on its programs by uploading them to YouTube. Google — which acquired YouTube in 2006 — defended itself by invoking the principles behind Betamax and notice-and-takedown, arguing that it had lived up to its legal obligations and that Betamax established that “inducement” to copyright infringement didn’t create liability for tech companies (recall that Sony had advertised the VCR as a means of violating copyright law by recording Hollywood movies and watching them at your friends’ houses, and the Supreme Court decided it didn’t matter). 

But with Grokster hanging over Google’s head, there was reason to believe that this defense might not fly. There was a real possibility that Viacom could sue YouTube out of existence — indeed, profanity-laced internal communications from Viacom — which Google extracted through the legal discovery process — showed that Viacom execs had been hotly debating which one of them would add YouTube to their private empire when Google was forced to sell YouTube to the company. 

Google squeaked out a victory, but was determined not to end up in a mess like the Viacom suit again. It created Content ID, an “audio fingerprinting” tool that was pitched as a way for rights holders to block, or monetize, the use of their copyrighted works by third parties. YouTube allowed large (at first) rightsholders to upload their catalogs to a blocklist, and then scanned all user uploads to check whether any of their audio matched a “claimed” clip. 

Once Content ID determined that a user was attempting to post a copyrighted work without permission from its rightsholder, it consulted a database to determine the rights holder’s preference. Some rights holders blocked any uploads containing audio that matched theirs; others opted to take the ad revenue generated by that video. 

There are lots of problems with this. Notably, there’s the inability of Content ID to determine whether a third party’s use of someone else’s copyright constitutes “fair use.” As discussed, fair use is the suite of uses that are permitted even if the rightsholder objects, such as taking excerpts for critical or transformational purposes. Fair use is a “fact intensive” doctrine—that is, the answer to “Is this fair use?” is almost always “It depends, let’s ask a judge.” 

Computers can’t sort fair use from infringement. There is no way they ever can. That means that filters block all kinds of legitimate creative work and other expressive speech — especially work that makes use of samples or quotations. 

But it’s not just creative borrowing, remixing and transformation that filters struggle with. A lot of creative work is similar to other creative work. For example, a six-note phrase from Katy Perry’s 2013 song “Dark Horse” is effectively identical to a six-note phrase in “Joyful Noise,” a 2008 song by a much less well-known Christian rapper called Flame. Flame and Perry went several rounds in the courts, with Flame accusing Perry of violating his copyright. Perry eventually prevailed, which is good news for her. 

But YouTube’s filters struggle to distinguish Perry’s six-note phrase from Flame’s (as do the executives at Warner Chappell, Perry’s publisher, who have periodically accused people who post snippets of Flame’s “Joyful Noise” of infringing on Perry’s “Dark Horse”). Even when the similarity isn’t as pronounced as in Dark, Joyful, Noisy Horse, filters routinely hallucinate copyright infringements where none exist — and this is by design. 

To understand why, first we have to think about filters as a security measure — that is, as a measure taken by one group of people (platforms and rightsholder groups) who want to stop another group of people (uploaders) from doing something they want to do (upload infringing material). 

It’s pretty trivial to write a filter that blocks exact matches: the labels could upload losslessly encoded pristine digital masters of everything in their catalog, and any user who uploaded a track that was digitally or acoustically identical to that master would be blocked. 

But it would be easy for an uploader to get around a filter like this: they could just compress the audio ever-so-slightly, below the threshold of human perception, and this new file would no longer match. Or they could cut a hundredth of a second off the beginning or end of the track, or omit a single bar from the bridge, or any of a million other modifications that listeners are unlikely to notice or complain about. 

Filters don’t operate on exact matches: instead, they employ “fuzzy” matching. They don’t just block the things that rights holders have told them to block — they block stuff that’s similar to those things that rights holders have claimed. This fuzziness can be adjusted: the system can be made more or less strict about what it considers to be a match. 
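To make the exact-versus-fuzzy distinction concrete, here is a toy sketch in Python. It is not Content ID's actual algorithm (YouTube's fingerprinting is proprietary); it just quantizes audio samples into coarse fingerprint frames, so a lightly re-encoded upload still matches the claimed master, and exposes the strictness threshold as a tunable knob:

```python
# Toy illustration of fuzzy matching with an adjustable strictness knob.
# Not Content ID's real algorithm; a hypothetical stand-in for the idea
# that filters match "similar enough" audio, not bit-identical files.

def fingerprint(samples, bucket=8):
    """Coarsely quantize samples so imperceptible changes land in the same frame."""
    return [round(s / bucket) for s in samples]

def similarity(fp_a, fp_b):
    """Fraction of fingerprint frames that agree between two clips."""
    matches = sum(a == b for a, b in zip(fp_a, fp_b))
    return matches / max(len(fp_a), len(fp_b))

master = [120, 64, 64, 200, 31, 180, 120, 64]      # claimed recording
compressed = [121, 63, 65, 199, 30, 181, 121, 65]  # same track, slightly re-encoded
other_work = [10, 220, 15, 90, 90, 200, 40, 7]     # unrelated recording

fp_master = fingerprint(master)

STRICTNESS = 0.6  # lower = looser matching = more false positives
for name, clip in [("re-encode", compressed), ("other work", other_work)]:
    score = similarity(fp_master, fingerprint(clip))
    verdict = "blocked" if score >= STRICTNESS else "allowed"
    print(f"{name}: similarity {score:.2f} -> {verdict}")
```

Loosening `STRICTNESS` catches more evasive re-encodes, but, as the chapter argues, it also makes collisions with merely similar works, such as a classical performance of the same score or a shared six-note phrase, correspondingly more likely.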

Rightsholder groups want the matches to be as loose as possible, because somewhere out there, there might be someone who’d be happy with a very fuzzy, truncated version of a song, and they want to stop that person from getting the song for free. The looser the matching, the more false positives. This is an especial problem for classical musicians: their performances of Bach, Beethoven and Mozart inevitably sound an awful lot like the recordings that Sony Music (the world’s largest classical music label) has claimed in Content ID. As a result, it has become nearly impossible to earn a living off of online classical performance: your videos are either blocked, or the ad revenue they generate is shunted to Sony. Even teaching classical music performance has become a minefield, as painstakingly produced, free online lessons are blocked by Content ID or, if the label is feeling generous, the lessons are left online but the ad revenue they earn is shunted to a giant corporation, stealing the creative wages of a music teacher.

Notice-and-takedown law didn’t give rights holders the internet they wanted. What kind of internet was that? Well, though entertainment giants said all they wanted was an internet free from copyright infringement, their actions — and the candid memos released in the Viacom case — make it clear that blocking infringement is a pretext for an internet where the entertainment companies get to decide who can make a new technology and how it will function.
