What happened to Washington’s wildlife after the largest dam removal in US history

The man-made flood that miraculously saves our heroes at the end of O Brother, Where Art Thou? was an actual occurrence in the 19th and 20th centuries — and a fairly common one at that — as river valleys across the American West were dammed up and drowned out at the altar of economic progress and electrification. Such was the case with Washington State's Elwha River in the 1910s. Its dam provided the economic impetus to develop the Olympic Peninsula but also blocked off nearly 40 miles of river from the open ocean, preventing native salmon species from making their annual spawning trek. However, after decades of legal wrangling by the Lower Elwha Klallam Tribe, the biggest dams on the river today are the kind made by beavers.

In this week's Hitting the Books selection, Eat, Poop, Die: How Animals Make Our World, University of Vermont conservation biologist Joe Roman recounts how quickly nature can recover when a 108-foot-tall migration barrier is removed from the local ecosystem. This excerpt discusses the naturalists and biologists who strive to understand how nutrients flow through the Pacific Northwest's food web, and the myriad ways that web is impacted by migratory salmon. The book as a whole takes a fascinating look at how the most basic of biological functions (yup, poopin'!) of even just a few species can potentially impact life in every corner of the planet.

Excerpted from Eat, Poop, Die: How Animals Make Our World by Joe Roman. Published by Hachette Book Group. Copyright © 2023 by Joe Roman. All rights reserved.


When construction began in 1910, the Elwha Dam was designed to attract economic development to the Olympic Peninsula in Washington, supplying the growing community of Port Angeles with electric power. It was one of the first high-head dams in the region, with water moving more than a hundred yards from the reservoir to the river below. Before the dam was built, the river hosted ten anadromous fish runs. All five species of Pacific salmon — pink, chum, sockeye, Chinook, and coho — were found in the river, along with bull trout and steelhead. In a good year, hundreds of thousands of salmon ascended the Elwha to spawn. But the contractors never finished the promised fish ladders. As a result, the dam cut off most of the watershed from the ocean and 90 percent of the river's migratory salmon habitat.

Thousands of dams block the rivers of the world, decimating fish populations and clogging nutrient arteries from sea to mountain spring. Some have fish ladders. Others ship fish across concrete walls. Many act as permanent barriers to migration for thousands of species.

By the 1980s, there was growing concern about the effect of the Elwha on native salmon. Populations had declined by 95 percent, devastating local wildlife and Indigenous communities. River salmon are essential to the culture and economy of the Lower Elwha Klallam Tribe. In 1986, the tribe filed a motion through the Federal Energy Regulatory Commission to stop the relicensing of the Elwha Dam and the Glines Canyon Dam, an upstream impoundment that was even taller than the Elwha. By blocking salmon migration, the dams violated the 1855 Treaty of Point No Point, in which the Klallam ceded a vast amount of the Olympic Peninsula on the stipulation that they and all their descendants would have “the right of taking fish at usual and accustomed grounds.” The tribe partnered with environmental groups, including the Sierra Club and the Seattle Audubon Society, to pressure local and federal officials to remove the dams. In 1992, Congress passed the Elwha River Ecosystem and Fisheries Restoration Act, which authorized the dismantling of the Elwha and Glines Canyon Dams.

The demolition of the Elwha Dam was the largest dam-removal project in history; it cost $350 million and took about three years. Beginning in September 2011, coffer dams shunted water to one side as the Elwha Dam was decommissioned and destroyed. The Glines Canyon Dam was more challenging. According to Pess, a “glorified jackhammer on a floating barge” was required to dismantle the two-hundred-foot impoundment. The barge didn’t work when the water got low, so new equipment was helicoptered in. By 2014, most of the dam had come down, but rockfall still blocked fish passage. It took another year of moving rocks and concrete before the fish had full access to the river.

The response of the fish was quick, satisfying, and sometimes surprising. Elwha River bull trout, landlocked for more than a century, started swimming back to the ocean. The Chinook salmon in the watershed increased from an average of about two thousand to four thousand. Many of the Chinook were descendants of hatchery fish, Pess told me over dinner at Nerka. “If ninety percent of your population prior to dam removal is from a hatchery, you can’t just assume that a totally natural population will show up right away.” Steelhead trout, which had been down to a few hundred, now numbered more than two thousand.

Within a few years, a larger mix of wild and local hatchery fish had moved back to the Elwha watershed. And the surrounding wildlife responded too. The American dipper, a river bird, fed on salmon eggs and insects infused with the new marine-derived nutrients. Their survival rates went up, and the females who had access to fish became healthier than those without. They started having multiple broods and didn’t have to travel so far for their food, a return, perhaps, to how life was before the dam. A study in nearby British Columbia showed that songbird abundance and diversity increased with the number of salmon. They weren’t eating the fish — in fact, they weren’t even present during salmon migration. But they were benefiting from the increase in insects and other invertebrates.

Just as exciting, the removal of the dams rekindled migratory patterns that had gone dormant. Pacific lamprey started traveling up the river to breed. Bull trout that had spent generations in the reservoir above the dam began migrating out to sea. Rainbow trout swam up and down the river for the first time in decades. Over the years, the river started to look almost natural as the sediments that had built up behind the dams washed downstream.

The success on the Elwha could be the start of something big, encouraging the removal of other aging dams. There are plans to remove the Enloe Dam, a fifty-four-foot concrete wall in northern Washington, which would open up two hundred miles of river habitat for steelhead and Chinook salmon. Critically endangered killer whales, downstream off the coast of the Pacific Northwest, would benefit from this boost in salmon, and as there are only seventy individuals remaining, they need every fish they can get.

The spring Chinook salmon run on the Klamath River in Northern California is down 98 percent since eight dams were constructed in the twentieth century. Coho salmon have also been in steep decline. In the next few years, four dams are scheduled to come down with the goal of restoring salmon migration. Farther north, the Snake River dams could be breached to save the endangered salmon of Washington State. If that happens, historic numbers of salmon could come back — along with the many species that depended on the energy and nutrients they carry upstream.

Other dams are going up in the West — dams of sticks and stones and mud. Beaver dams help salmon by creating new slow-water habitats, critical for juvenile salmon. In Washington, beaver ponds cool the streams, making them more productive for salmon. In Alaska, the ponds are warmer, and the salmon use them to help metabolize what they eat. Unlike the enormous concrete impoundments, designed for stability, beaver dams are dynamic, heterogeneous landscapes that salmon can easily travel through. Beavers eat, they build dams, they poop, they move on. We humans might want things to be stable, but Earth and its creatures are dynamic.

This article originally appeared on Engadget at https://www.engadget.com/hitting-the-books-eat-poop-die-joe-roman-hatchette-books-153032502.html?src=rss

Humane’s Ai Pin costs $699 and ships in early 2024, which is about all we know for certain

Wearable startup Humane AI has been dripping details about its upcoming device, the AI Pin, for months now. We first saw it at a TED Talk in May and, more recently, got a glimpse of its promised capabilities at Paris Fashion Week, ahead of Thursday's official unveiling. However, many questions about how the wearable AI will actually do what the company says it will remain unanswered.

Here's what we do know: Humane is a much-hyped startup founded by former Apple employees. Its first product is the Humane AI Pin, a pocket-worn wearable AI assistant that can reportedly perform the tasks that many modern cellphones and digital assistants do, but in a radically different form factor. It has no screen, instead reportedly operating primarily through voice commands and occasionally through a virtual screen projected onto the user's hand. It costs $699, plus another $24 a month, because Humane insisted on launching its own MVNO (mobile virtual network operator) on top of T-Mobile's network. That $24/month "Humane Subscription" includes a dedicated cell phone number for the Pin with unlimited talk, text and data, rather than allowing the device to tether to your existing phone.

The device itself will be available in three colors — Eclipse, Equinox, and Lunar — when orders begin shipping in early 2024. The magnetic clip that affixes the device to your clothing doubles as battery storage and includes a pair of backup batteries for users to keep with them. The AI Pin also sports an ultra-wide RGB camera as well as depth and motion sensors, which together allow "the device to see the world as you see it," per the company's release.

The AI Pin will reportedly run on a Snapdragon processor with a dedicated Qualcomm AI Engine supporting its custom Cosmos OS. Its "entirely new AI software framework, the Ai Bus," reportedly removes the need to actually download content to the device itself. Instead, it "quickly understands what you need, connecting you to the right AI experience or service instantly." Collaborations with both Microsoft and OpenAI will reportedly give the AI Pin "access to some of the world’s most powerful AI models and platforms."

There is still much we don't know about the AI Pin, however, like how long each battery module lasts and how sensitive the anti-tamper system that will lock down a "compromised" device is. Live demonstrations of the technology have been rare to date and hands-on opportunities nearly nonexistent. Humane is hosting a debut event Thursday afternoon where, presumably, functional iterations of the AI Pin will be on display.

This article originally appeared on Engadget at https://www.engadget.com/humanes-ai-pin-costs-699-and-ships-in-early-2024-which-is-about-all-we-know-for-certain-181048809.html?src=rss

Google’s AI-powered search feature goes global with a 120-country expansion

Google's Search Generative Experience (SGE), which currently provides generative AI summaries at the top of the search results page for select users, is about to be much more available. Just six months after its debut at I/O 2023, the company announced Wednesday that SGE is expanding to Search Labs users in 120 countries and territories, gaining support for four additional languages and receiving a handful of helpful new features.

Unlike its frenetic rollout of the Bard chatbot in March, Google has taken a slightly more measured approach to distributing its AI search assistant. The company began with English-language searches in the US in May, expanded to English-language users in India and Japan in August and on to teen users in September. As of Wednesday, users from Brazil to Bhutan can give the feature a try. SGE now supports Spanish, Portuguese, Korean and Indonesian in addition to the existing English, Hindi and Japanese, so you'll be able to search and converse with the assistant in natural language, whichever form it might take. These features arrive on Chrome desktop Wednesday, with the Search Labs for Android app versions slowly rolling out over the coming week.

Among SGE's new features is an improved follow-up function that lets users ask additional questions of the assistant directly on the search results page. Like a mini-Bard window tucked into the generated summary, the new feature enables users to drill down on a subject without leaving the results page or even needing to type their queries out. Google will reportedly restrict ads to specific, clearly denoted areas of the page to avoid confusion between them and the generated content. Users can expect follow-ups to start showing up in the coming weeks. They're only for English-language users in the US to start but will likely expand as Google continues to iterate on the technology.

SGE will start helping to clarify ambiguous translation terms as well. For example, if you're trying to translate "Is there a tie?" into Spanish, the output depends on whether you mean a tie as in a draw between two competitors ("un empate") or the tie you wear around your neck ("una corbata"). The new feature automatically recognizes such words and highlights them; clicking one pops up a window asking you to pick between the two versions. This is going to be super helpful with languages that, say, treat cars as masculine but bicycles as feminine, where you need to specify the version you intend. Spanish is one of those languages, and the capability is coming first to US users for English-to-Spanish translations.
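
To make the flow concrete, here's a minimal Python sketch of the mechanics the feature describes: detect a known-ambiguous term, surface the candidate senses and let the user pick. The AMBIGUOUS table and the command-line prompt are purely hypothetical stand-ins; Google hasn't published how SGE detects these terms.

```python
# Toy sketch of ambiguous-term disambiguation in translation. NOT Google's
# implementation: the AMBIGUOUS table and prompt are hypothetical stand-ins
# for SGE's model-driven detection and its pop-up UI.
AMBIGUOUS = {
    ("tie", "es"): {
        "a draw between competitors": "un empate",
        "neckwear": "una corbata",
    },
}

def translate_with_check(word: str, target_lang: str) -> str:
    senses = AMBIGUOUS.get((word, target_lang))
    if not senses:
        return f"<translate {word!r} normally>"
    # The real feature highlights the word and asks in a pop-up window;
    # here we just prompt on the command line.
    options = list(senses.items())
    for i, (meaning, translation) in enumerate(options, 1):
        print(f"{i}. {word} ({meaning}) -> {translation}")
    choice = int(input("Which sense did you mean? ")) - 1
    return options[choice][1]

if __name__ == "__main__":
    print(translate_with_check("tie", "es"))
```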

Finally, Google plans to expand its interactive definitions, normally found in the generated summaries for educational topics like science, history or economics, to coding and health-related searches as well. This update should arrive within the next month, again first for English-language users in the US before spreading to more territories in the coming months.

This article originally appeared on Engadget at https://www.engadget.com/googles-ai-powered-search-feature-goes-global-with-a-120-country-expansion-180028037.html?src=rss

NVIDIA’s Eos supercomputer just broke its own AI training benchmark record

Depending on the hardware you're using, training a large language model of any significant size can take weeks, months, even years to complete. That's no way to do business — nobody has the electricity and time to be waiting that long. On Wednesday, NVIDIA unveiled the newest iteration of its Eos supercomputer, one powered by more than 10,000 H100 Tensor Core GPUs and capable of training a 175 billion-parameter GPT-3 model on 1 billion tokens in under four minutes. That's three times faster than the previous benchmark on the MLPerf AI industry standard, which NVIDIA set just six months ago.

Eos represents an enormous amount of compute. It leverages 10,752 GPUs strung together using NVIDIA's Infiniband networking (moving a petabyte of data a second) and 860 terabytes of high-bandwidth memory (36PB/sec of aggregate bandwidth and 1.1PB/sec of interconnect) to deliver 40 exaflops of AI processing power. The entire cloud architecture comprises 1,344 nodes — individual servers that companies can rent access to for around $37,000 a month to expand their AI capabilities without building out their own infrastructure.

In all, NVIDIA set six records in nine benchmark tests: the 3.9-minute notch for GPT-3, a 2.5-minute mark to train a Stable Diffusion model using 1,024 Hopper GPUs, a minute even to train DLRM, 55.2 seconds for RetinaNet, 46 seconds for 3D U-Net, and just 7.2 seconds to train the BERT-Large model.

NVIDIA was quick to note that the 175 billion-parameter version of GPT-3 used in the benchmarking is not put through a full-sized training run (neither was the Stable Diffusion model). A complete GPT-3 training run consumes around 3.7 trillion tokens and is just flat-out too big and unwieldy for use as a benchmarking test. For example, it'd take 18 months to complete on the older A100 system with 512 GPUs, though Eos needs just eight days.

So instead, NVIDIA and MLCommons, which administers the MLPerf standard, leverage a more compact version that uses 1 billion tokens (the smallest denominator unit of data that generative AI systems understand). This test uses a GPT-3 version with the same number of potential switches to flip as the full-size model (those 175 billion parameters), just a much more manageable dataset to train it on (a billion tokens vs 3.7 trillion).
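
For a hands-on sense of what a token is, here's a quick count using OpenAI's open-source tiktoken library. The tokenizer choice is an arbitrary illustration on our part; the MLPerf GPT-3 benchmark uses its own vocabulary, so exact counts there will differ.

```python
# What a "token" is, concretely (pip install tiktoken). The cl100k_base
# encoding is an illustrative choice, not what MLPerf's benchmark uses.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")
ids = enc.encode("Training a large language model can take weeks or months.")
print(len(ids), "tokens:", ids)
# A full GPT-3 training run consumes trillions of these units; the
# benchmark's 1 billion tokens is a comparatively tiny slice.
```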

The impressive improvement in performance, granted, came from the fact that this recent round of tests employed 10,752 H100 GPUs compared to the 3,584 Hopper GPUs the company used in June's benchmarking trials. However, NVIDIA explains that despite tripling the number of GPUs, it managed to maintain 2.8x scaling in performance — a 93 percent efficiency rate — through the generous use of software optimization.
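
That efficiency figure is simple arithmetic, which a few lines of Python make explicit:

```python
# Back-of-the-envelope check on NVIDIA's scaling claim: tripling the GPU
# count while keeping a 2.8x speedup works out to ~93 percent efficiency.
gpus_june = 3_584     # Hopper GPUs in the June benchmark run
gpus_now = 10_752     # H100 GPUs in the current run
speedup = 2.8         # reported performance scaling

scale_factor = gpus_now / gpus_june   # 3.0x the hardware
efficiency = speedup / scale_factor   # ~0.93
print(f"{scale_factor:.1f}x GPUs, {speedup}x speed -> {efficiency:.0%} efficiency")
```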

"Scaling is a wonderful thing," Salvator said."But with scaling, you're talking about more infrastructure, which can also mean things like more cost. An efficiently scaled increase means users are "making the best use of your of your infrastructure so that you can basically just get your work done as fast [as possible] and get the most value out of the investment that your organization has made."

The chipmaker was not alone in its development efforts. Microsoft's Azure team submitted a similar 10,752 H100 GPU system for this round of benchmarking, and achieved results within two percent of NVIDIA's.

"[The Azure team have] been able to achieve a performance that's on par with the Eos supercomputer," Dave Salvator Director of Accelerated Computing Products at NVIDIA, told reporters during a Tuesday prebrief. What's more "they are using Infiniband, but this is a commercially available instance. This isn't some pristine laboratory system that will never have actual customers seeing the benefit of it. This is the actual instance that Azure makes available to its customers."

NVIDIA plans to apply these expanded compute abilities to a variety of tasks, including the company's ongoing work in foundational model development, AI-assisted GPU design, neural rendering, multimodal generative AI and autonomous driving systems.

"Any good benchmark looking to maintain its market relevance has to continually update the workloads it's going to throw at the hardware to best reflect the market it's looking to serve," Salvator said, noting that MLCommons has recently added an additional benchmark for testing model performance on Stable Diffusion tasks. "This is another exciting area of generative AI where we're seeing all sorts of things being created" — from programming code to discovering protein chains.

These benchmarks are important because, as Salvator points out, the current state of generative AI marketing can be a bit of a "Wild West." The lack of stringent oversight and regulation means, "we sometimes see with certain AI performance claims where you're not quite sure about all the parameters that went into generating those particular claims." MLPerf provides the professional assurance that the benchmark numbers companies generate using its tests "were reviewed, vetted, in some cases even challenged or questioned by other members of the consortium," Salvator said. "It's that sort of peer reviewing process that really brings credibility to these results."

NVIDIA has been steadily focusing on its AI capabilities and applications in recent months. "We are at the iPhone moment for AI," CEO Jensen Huang said during his GTC keynote in March. At that time the company announced its DGX Cloud system, which portions out slivers of the supercomputer's processing power — specifically, instances of either eight H100 or A100 chips with 80GB of VRAM each (640GB of memory in total). The company expanded its supercomputing portfolio with the release of DGX GH200 at Computex in May.

This article originally appeared on Engadget at https://www.engadget.com/nvidias-eos-supercomputer-just-broke-its-own-ai-training-benchmark-record-170042546.html?src=rss

Meta reportedly won’t make its AI advertising tools available to political marketers

Facebook is no stranger to moderating and mitigating misinformation on its platform, having long employed machine learning and artificial intelligence systems to help supplement its human-led moderation efforts. At the start of October, the company extended its machine learning expertise to its advertising efforts with an experimental set of generative AI tools that can perform tasks like generating backgrounds, adjusting images and creating captions for an advertiser's video content. Reuters reported Monday that Meta will specifically not make those tools available to political marketers ahead of what is expected to be a brutal and divisive national election cycle.

Meta's decision to bar the use of generative AI is in line with much of the social media ecosystem, though, as Reuters is quick to point out, the company "has not yet publicly disclosed the decision in any updates to its advertising standards." TikTok and Snap both ban political ads on their networks, Google employs a "keyword blacklist" to prevent its generative AI advertising tools from straying into political speech, and X (formerly Twitter) is, well, you've seen it.

Meta's existing policies do allow for some latitude. The company bans "misleading AI-generated video in all content, including organic non-paid posts, with an exception for parody or satire," per Reuters. Those exceptions are currently under review by the company's independent Oversight Board as part of a case in which Meta left up an "altered" video of President Biden because, the company argued, it was not generated by an AI.

Facebook, along with other leading Silicon Valley AI companies, agreed in July to voluntary commitments set out by the White House enacting technical and policy safeguards in the development of their future generative AI systems. Those include expanding adversarial machine learning (aka red-teaming) efforts to root out bad model behavior, sharing trust and safety information both within the industry and with the government, and developing a digital watermarking scheme to authenticate official content and make clear that it is not AI-generated.

This article originally appeared on Engadget at https://www.engadget.com/meta-reportedly-wont-make-its-ai-advertising-tools-available-to-political-marketers-010659679.html?src=rss

OpenAI GPTs are customizable AI bots that anyone can create

It’s been nearly a year since ChatGPT’s public debut and its evolution since then has been nothing short of extraordinary. In just over 11 months, OpenAI’s chatbot has gained the ability to write programming code, process information between multiple modalities and expand its reach across the internet with APIs. During OpenAI’s 2023 DevDay keynote address Monday, CEO Sam Altman and other executives took to the stage in San Francisco to unveil the chatbot’s latest underlying model, GPT-4 Turbo, as well as an exciting new way to bring generative AI technology to everybody, regardless of their coding capability: GPTs!

GPTs are small, task-specific iterations of ChatGPT. Think of them like the single-purpose apps and features on your phone, but instead of maintaining a timer or stopwatch, or transcribing your voice instructions into a shopping list, GPTs will do basically anything you train them to. OpenAI offers up eight examples of what GPTs can be used for — anything from a digital kitchen assistant that suggests recipes based on what’s in your pantry, to a math mentor to help your kids through their homework, to a Sticker Wiz that will “turn your wildest dreams into die-cut stickers, shipped right to your door.”

The new GPTs are an expansion on the company’s existing Custom Instructions feature, which debuted in July. OpenAI notes that many of its power users were already recycling and updating their most effective prompts and instruction sets, a process which GPT-4 Turbo will now handle automatically as part of its update to seed parameters and focus on reproducible outputs. This will allow users a far greater degree of control in customizing the GPTs to their specific needs.

What users won’t need is an extensive understanding of programming languages like JavaScript. With GPT-4 Turbo’s improved code interpretation, retrieval and function-calling capabilities, as well as its massively increased context window size, users will be able to devise and develop their GPTs using nothing but natural language.

Any GPT created by the community will be immediately shareable. For now, that will happen directly between users but later this month, OpenAI plans to launch a centralized storefront where “verified builders” can post and share their GPTs. The most popular ones will climb a leaderboard and, potentially, eventually earn their creators money based on how many people are using the GPT.

GPTs will be available to both regular users and enterprise accounts which, like the ChatGPT Enterprise tier that came out earlier this year, will offer institutional users the chance to create their own internal-only, admin-approved mini-chatbots. These will work with (and can be trained on) a company’s specific tasks, department documentation or proprietary datasets. Enterprise GPTs arrive for those customers on Wednesday.

Privacy remains a focal point for the company with additional technical safeguards being put into place, atop existing moderation systems, to prevent people from making GPTs that go against OpenAI’s usage policies. The company is also rolling out an identity verification system for developers to help improve transparency and trust, but did not elaborate on what that process could entail.

This article originally appeared on Engadget at https://www.engadget.com/gpts-are-the-single-application-mini-chatgpt-models-that-anyone-can-create-203311858.html?src=rss

How the meandering legal definition of ‘fair use’ cost us Napster but gave us Spotify

The internet's "enshittification," as veteran journalist and privacy advocate Cory Doctorow describes it, began decades before TikTok made the scene. Elder millennials remember the good old days of Napster — followed by the much worse old days of Napster being sued into oblivion along with Grokster and the rest of the P2P sharing ecosystem, until we were left with a handful of label-approved, catalog-sterilized streaming platforms like Pandora and Spotify. Three cheers for corporate copyright litigation.

In his new book The Internet Con: How to Seize the Means of Computation, Doctorow examines the modern social media landscape, cataloging and illustrating the myriad failings and short-sighted business decisions of the Big Tech companies operating the services that promised us the future but just gave us more Nazis. We have both an obligation and responsibility to dismantle these systems, Doctorow argues, and a means to do so with greater interoperability. In this week's Hitting the Books excerpt, Doctorow examines the aftermath of the lawsuits against P2P sharing services, as well as the role that the Digital Millennium Copyright Act's "notice-and-takedown" reporting system and YouTube's "Content ID" scheme play on modern streaming sites.

Excerpted from The Internet Con: How to Seize the Means of Computation by Cory Doctorow. Published by Verso. Copyright © 2023 by Cory Doctorow. All rights reserved.


Seize the Means of Computation

The harms from notice-and-takedown itself don’t directly affect the big entertainment companies. But in 2007, the entertainment industry itself engineered a new, more potent form of notice-and-takedown that manages to inflict direct harm on Big Content, while amplifying the harms to the rest of us. 

That new system is “notice-and-stay-down,” a successor to notice-and-takedown that monitors everything every user uploads or types and checks to see whether it is similar to something that has been flagged as a copyrighted work. This has long been a legal goal of the entertainment industry, and in 2019 it became a feature of EU law, but back in 2007, notice-and-stay-down made its debut as a voluntary modification to YouTube, called “Content ID.”

Some background: in 2007, Viacom (part of CBS) filed a billion-dollar copyright suit against YouTube, alleging that the company had encouraged its users to infringe on its programs by uploading them to YouTube. Google — which acquired YouTube in 2006 — defended itself by invoking the principles behind Betamax and notice-and-takedown, arguing that it had lived up to its legal obligations and that Betamax established that “inducement” to copyright infringement didn’t create liability for tech companies (recall that Sony had advertised the VCR as a means of violating copyright law by recording Hollywood movies and watching them at your friends’ houses, and the Supreme Court decided it didn’t matter). 

But with Grokster hanging over Google’s head, there was reason to believe that this defense might not fly. There was a real possibility that Viacom could sue YouTube out of existence — indeed, profanity-laced internal communications from Viacom — which Google extracted through the legal discovery process — showed that Viacom execs had been hotly debating which one of them would add YouTube to their private empire when Google was forced to sell YouTube to the company. 

Google squeaked out a victory, but was determined not to end up in a mess like the Viacom suit again. It created Content ID, an “audio fingerprinting” tool that was pitched as a way for rights holders to block, or monetize, the use of their copyrighted works by third parties. YouTube allowed large (at first) rightsholders to upload their catalogs to a blocklist, and then scanned all user uploads to check whether any of their audio matched a “claimed” clip. 

Once Content ID determined that a user was attempting to post a copyrighted work without permission from its rightsholder, it consulted a database to determine the rights holder’s preference. Some rights holders blocked any uploads containing audio that matched theirs; others opted to take the ad revenue generated by that video. 
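
Based on that description, the decision logic is straightforward. Here's a minimal Python sketch; the fingerprint keys and preference table are hypothetical stand-ins, and YouTube's actual (unpublished) pipeline is far more involved.

```python
# Minimal sketch of the Content ID decision flow described above; policies,
# fingerprints and the preference table are all hypothetical.
from enum import Enum

class Policy(Enum):
    BLOCK = "block"          # rightsholder blocks any matching upload
    MONETIZE = "monetize"    # rightsholder takes the upload's ad revenue

# Hypothetical rightsholder preferences, keyed by claimed-work fingerprint.
PREFERENCES = {
    "claim:dark_horse": Policy.MONETIZE,
    "claim:blockbuster_theme": Policy.BLOCK,
}

def handle_upload(matched_claims: list[str]) -> str:
    for claim in matched_claims:
        policy = PREFERENCES.get(claim)
        if policy is Policy.BLOCK:
            return "upload blocked"
        if policy is Policy.MONETIZE:
            return "published; ad revenue routed to the rightsholder"
    return "published normally"

print(handle_upload(["claim:dark_horse"]))
```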

There are lots of problems with this. Notably, there’s the inability of Content ID to determine whether a third party’s use of someone else’s copyright constitutes “fair use.” As discussed, fair use is the suite of uses that are permitted even if the rightsholder objects, such as taking excerpts for critical or transformational purposes. Fair use is a “fact intensive” doctrine—that is, the answer to “Is this fair use?” is almost always “It depends, let’s ask a judge.” 

Computers can’t sort fair use from infringement. There is no way they ever can. That means that filters block all kinds of legitimate creative work and other expressive speech — especially work that makes use of samples or quotations. 

But it’s not just creative borrowing, remixing and transformation that filters struggle with. A lot of creative work is similar to other creative work. For example, a six-note phrase from Katy Perry’s 2013 song “Dark Horse” is effectively identical to a six-note phrase in “Joyful Noise,” a 2008 song by a much less well-known Christian rapper called Flame. Flame and Perry went several rounds in the courts, with Flame accusing Perry of violating his copyright. Perry eventually prevailed, which is good news for her. 

But YouTube’s filters struggle to distinguish Perry’s six-note phrase from Flame’s (as do the executives at Warner Chappell, Perry’s publisher, who have periodically accused people who post snippets of Flame’s “Joyful Noise” of infringing on Perry’s “Dark Horse”). Even when the similarity isn’t as pronounced as in Dark, Joyful, Noisy Horse, filters routinely hallucinate copyright infringements where none exist — and this is by design. 

To understand why, first we have to think about filters as a security measure — that is, as a measure taken by one group of people (platforms and rightsholder groups) who want to stop another group of people (uploaders) from doing something they want to do (upload infringing material). 

It’s pretty trivial to write a filter that blocks exact matches: the labels could upload losslessly encoded pristine digital masters of everything in their catalog, and any user who uploaded a track that was digitally or acoustically identical to that master would be blocked. 

But it would be easy for an uploader to get around a filter like this: they could just compress the audio ever-so-slightly, below the threshold of human perception, and this new file would no longer match. Or they could cut a hundredth of a second off the beginning or end of the track, or omit a single bar from the bridge, or any of a million other modifications that listeners are unlikely to notice or complain about. 
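
A few lines of Python show why exact matching is so brittle: change a single byte, far below the threshold of hearing, and the cryptographic hash no longer matches at all.

```python
# Why exact matching is trivially evaded: one imperceptible modification
# to an audio buffer yields an entirely different hash.
import hashlib

master = bytes(1_000_000)          # stand-in for a pristine digital master
tweaked = bytearray(master)
tweaked[-1] ^= 1                   # flip one bit of one sample

print(hashlib.sha256(master).hexdigest()[:16])
print(hashlib.sha256(bytes(tweaked)).hexdigest()[:16])  # no match
```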

Filters don’t operate on exact matches: instead, they employ “fuzzy” matching. They don’t just block the things that rights holders have told them to block — they block stuff that’s similar to those things that rights holders have claimed. This fuzziness can be adjusted: the system can be made more or less strict about what it considers to be a match. 
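
Here's a toy version of such a fuzzy matcher with a tunable threshold. Real systems fingerprint spectrograms with far more sophistication; this sketch only illustrates the loose-versus-strict knob the excerpt describes.

```python
# Toy fuzzy matcher: compare coarse audio "fingerprints" (per-chunk average
# energy) and call it a match when similarity clears a tunable threshold.
def fingerprint(samples: list[float], chunks: int = 32) -> list[float]:
    n = max(1, len(samples) // chunks)
    return [sum(abs(s) for s in samples[i:i + n]) / n
            for i in range(0, n * chunks, n)]

def similarity(a: list[float], b: list[float]) -> float:
    diffs = [abs(x - y) / (max(x, y) or 1.0) for x, y in zip(a, b)]
    return 1.0 - sum(diffs) / len(diffs)

def is_match(upload: list[float], claimed: list[float],
             threshold: float = 0.90) -> bool:
    # Lowering `threshold` loosens matching: more evasive copies caught,
    # but more false positives on legitimate look-alike performances.
    return similarity(fingerprint(upload), fingerprint(claimed)) >= threshold
```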

Rightsholder groups want the matches to be as loose as possible, because somewhere out there, there might be someone who’d be happy with a very fuzzy, truncated version of a song, and they want to stop that person from getting the song for free. The looser the matching, the more false positives. This is an especial problem for classical musicians: their performances of Bach, Beethoven and Mozart inevitably sound an awful lot like the recordings that Sony Music (the world’s largest classical music label) has claimed in Content ID. As a result, it has become nearly impossible to earn a living off of online classical performance: your videos are either blocked, or the ad revenue they generate is shunted to Sony. Even teaching classical music performance has become a minefield, as painstakingly produced, free online lessons are blocked by Content ID or, if the label is feeling generous, the lessons are left online but the ad revenue they earn is shunted to a giant corporation, stealing the creative wages of a music teacher.

Notice-and-takedown law didn’t give rights holders the internet they wanted. What kind of internet was that? Well, though entertainment giants said all they wanted was an internet free from copyright infringement, their actions — and the candid memos released in the Viacom case — make it clear that blocking infringement is a pretext for an internet where the entertainment companies get to decide who can make a new technology and how it will function.

This article originally appeared on Engadget at https://www.engadget.com/hitting-the-books-the-internet-con-cory-doctorow-verso-153018432.html?src=rss

Sony’s ‘GT Sophy’ racing AI is taking all Gran Turismo 7 challengers

Nearly two years after its prototype debut and eight months after its public beta, Sony's GT Sophy racing AI for Gran Turismo 7 is back, and going by Gran Turismo Sophy 2.0 now. It will be available to all PlayStation 5 users as part of the GT7 Spec II Update (Patch Update 1.40) being released on Wednesday, November 2 at 2 a.m. ET. 

We got our first look at the Sophy system back in February 2022. At that point it was already handily beating professional Gran Turismo players. “Gran Turismo Sophy is a significant development in AI whose purpose is not simply to be better than human players, but to offer players a stimulating opponent that can accelerate and elevate the players’ techniques and creativity to the next level,” Sony AI CEO, Hiroaki Kitano, said at the time. “In addition to making contributions to the gaming community, we believe this breakthrough presents new opportunities in areas such as autonomous racing, autonomous driving, high-speed robotics and control.”

The system's public beta this past February saw the AI competing against a small subset of the game's user base in the “Gran Turismo Sophy Race Together” event. Players who had already progressed sufficiently through the game were granted access to the special race, where they faced off against four AI-controlled opponents in a limited number of tracks. 

“The difference [between racers] is that, it's essentially the power you have versus the other cars on the track,” Sony AI's COO, Michael Spranger, told Engadget in February. “You have different levels of performance. In the beginning level, you have a much more powerful vehicle — still within the same class, but you're much faster [than your competition].” That advantage shrank as players advanced through the race rounds and Sophy gained access to increasingly capable vehicles. In September, Sophy learned to drift.

“We have evolved GT Sophy from a research project tackling the grand challenge of creating an AI agent that could outperform top drivers in a top simulation racing game, to a functional game feature that provides all game players a formidable, human-like opponent that enhances the overall racing experience," Spranger said in a press statement released Wednesday.

With Wednesday's announcement, the number of vehicles Sophy can pilot rises from the meager four models available during the beta event to 340 (yes, three hundred and forty) vehicles across nine unique tracks. Per Sony, that means players can race against GT Sophy in 95 percent of the playable in-game models, and the CPU will select its car based on the player's chosen model from their garage (that way they're not randomly facing down a 918 in a Nissan Versa or otherwise disadvantaged). The five percent of models it can't drive are the handful of hyper-spec specialty cars like the karts or the Dodge SRT Tomahawk VGT.

Players can match against Sophy in Quick Race mode (formerly "Arcade") regardless of their advancement through the game or current skill level. As long as you have a PS5, a network connection and the latest update patch installed, you too can get Toretto'ed by a stack of algorithmic processes. Good luck.

This article originally appeared on Engadget at https://www.engadget.com/sony-gt-sophy-racing-ai-gran-turismo-7-ps5-130057992.html?src=rss

Kamala Harris announces AI Safety Institute to protect American consumers

Just days after President Joe Biden unveiled a sweeping executive order retasking the federal government with regard to AI development, Vice President Kamala Harris used her remarks at the UK AI Safety Summit on Tuesday to announce a half dozen more machine learning initiatives that the administration is undertaking. Among the highlights: the establishment of the United States AI Safety Institute, the first release of draft policy guidance on the federal government's use of AI and a declaration on responsible military applications for the emerging technology.

"President Biden and I believe that all leaders, from government, civil society, and the private sector have a moral, ethical, and societal duty to make sure AI is adopted and advanced in a way that protects the public from potential harm and ensures that everyone is able to enjoy its benefits,” Harris said in her prepared remarks.

"Just as AI has the potential to do profound good, it also has the potential to cause profound harm, from AI-enabled cyber-attacks at a scale beyond anything we have seen before to AI-formulated bioweapons that could endanger the lives of millions," she said. The existential threats that generative AI systems present was a central theme of the summit

"To define AI safety we must consider and address the full spectrum of AI risk — threats to humanity as a whole, threats to individuals, to our communities and to our institutions, and threats to our most vulnerable populations," she continued. "To make sure AI is safe, we must manage all these dangers."

To that end, Harris announced Wednesday that the White House, in cooperation with the Department of Commerce, is establishing the United States AI Safety Institute (US AISI) within NIST. It will be responsible for actually creating and publishing all of the guidelines, benchmark tests, best practices and such for testing and evaluating potentially dangerous AI systems.

These tests could include the red-team exercises that President Biden mentioned in his EO. The AISI would also be tasked with providing technical guidance to lawmakers and law enforcement on a wide range of AI-related topics, including identifying generated content, authenticating live-recorded content, mitigating AI-driven discrimination, and ensuring transparency in its use.

Additionally, the Office of Management and Budget (OMB) is set to release for public comment the administration's first draft policy guidance on government AI use later this week. Like the Blueprint for an AI Bill of Rights that it builds upon, the draft policy guidance outlines steps that the national government can take to "advance responsible AI innovation" while maintaining transparency and protecting federal workers from increased surveillance and job displacement. This draft guidance will eventually be used to establish safeguards for the use of AI in a broad swath of public sector applications including transportation, immigration, health and education, so it is being made available for public comment at ai.gov/input.

Harris also announced during her remarks that the Political Declaration on the Responsible Use of Artificial Intelligence and Autonomy, which the US issued in February, has collected 30 signatories to date, all of whom have agreed to a set of norms for responsible development and deployment of military AI systems. Just 165 nations to go! The administration is also launching a virtual hackathon in an effort to blunt the harm AI-empowered phone and internet scammers can inflict. Hackathon participants will work to build AI models that can counter robocalls and robotexts, especially those targeting elderly folks with generated voice scams.

Content authentication is a growing focus of the Biden-Harris administration. President Biden's EO explained that the Commerce Department will be spearheading efforts to validate content produced by the White House through a collaboration with the C2PA and other industry advocacy groups. They'll work to establish industry norms, such as the voluntary commitments previously extracted from 15 of the largest AI firms in Silicon Valley. In her remarks, Harris extended that call internationally, asking for support from all nations in developing global standards in authenticating government-produced content. 
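
The authentication goal is easiest to see in miniature. Below is a toy Python sketch of verifying that a piece of official content hasn't been altered, using a keyed signature. This is an illustration only: the C2PA approach embeds signed provenance metadata in the media itself, and no official government scheme has been finalized.

```python
# Toy illustration of authenticating official content with a keyed
# signature. NOT the C2PA scheme; it only shows the verify-before-trust
# idea. The signing key is hypothetical.
import hashlib
import hmac

SIGNING_KEY = b"hypothetical-press-office-key"

def sign(content: bytes) -> str:
    return hmac.new(SIGNING_KEY, content, hashlib.sha256).hexdigest()

def verify(content: bytes, tag: str) -> bool:
    return hmac.compare_digest(sign(content), tag)

statement = b"Official statement issued by the press office."
tag = sign(statement)
print(verify(statement, tag))              # True: content is as issued
print(verify(b"Altered statement.", tag))  # False: tampering detected
```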

“These voluntary [company] commitments are an initial step toward a safer AI future, with more to come," she said. "As history has shown, in the absence of regulation and strong government oversight, some technology companies choose to prioritize profit over the wellbeing of their customers, the security of our communities and the stability of our democracies."

"One important way to address these challenges — in addition to the work we have already done — is through legislation — legislation that strengthens AI safety without stifling innovation," Harris continued. 

This article originally appeared on Engadget at https://www.engadget.com/kamala-harris-announces-ai-safety-institute-to-protect-american-consumers-060011065.html?src=rss