Sweeping White House executive order takes aim at AI’s toughest challenges

The Biden Administration unveiled its ambitious next steps in addressing and regulating artificial intelligence development on Monday. Its expansive new executive order (EO) seeks to establish further protections for the public as well as improve best practices for federal agencies and their contractors.

"The President several months ago directed his team to pull every lever," a senior administration official told reporters on a recent press call. "That's what this order does, bringing the power of the federal government to bear in a wide range of areas to manage AI's risk and harness its benefits ... It stands up for consumers and workers, promotes innovation and competition, advances American leadership around the world and like all executive orders, this one has the force of law."

These actions will be introduced over the next year, with smaller safety and security changes happening in around 90 days and more involved reporting and data transparency schemes requiring nine to 12 months to fully deploy. The administration is also creating an “AI council,” chaired by White House Deputy Chief of Staff Bruce Reed, which will meet with federal agency heads to ensure that the actions are being executed on schedule.

Photo: Bruce Reed, Assistant to the President and Deputy Chief of Staff, walks to Marine One behind President Joe Biden on July 6, 2022, in Washington. (AP Photo/Patrick Semansky)

Public safety

"In response to the President's leadership on the subject, 15 major American technology companies have begun their voluntary commitments to ensure that AI technology is safe, secure and trustworthy before releasing it to the public," the senior administration official said. "That is not enough."

The EO directs the establishment of new standards for AI safety and security, including reporting requirements for developers whose foundation models might impact national or economic security. Those requirements will also apply to the development of AI tools that autonomously implement security fixes on critical software infrastructure.

By leveraging the Defense Production Act, this EO will "require that companies developing any foundation model that poses a serious risk to national security, national economic security, or national public health and safety must notify the federal government when training the model, and must share the results of all red-team safety tests," per a White House press release. That information must be shared prior to the model being made available to the public, which could help reduce the rate at which companies unleash half-baked and potentially deadly machine learning products.

In addition to the sharing of red team test results, the EO also requires disclosure of the system’s training runs (essentially, its iterative development history). “What that does is that creates a space prior to the release… to verify that the system is safe and secure,” officials said.

Administration officials were quick to point out that this reporting requirement will not impact any AI models currently available on the market, nor will it impact independent or small- to medium-size AI companies moving forward, as the threshold for enforcement is quite high. It's geared specifically toward the next generation of AI systems that the likes of Google, Meta and OpenAI are already working on, with enforcement kicking in for models trained on more than 10^26 floating-point operations, a compute budget beyond that of any existing AI model. "This is not going to catch AI systems trained by graduate students, or even professors,” the administration official said.
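For a rough sense of where that threshold sits, the scaling-laws literature offers a standard back-of-the-envelope estimate: training compute of roughly six floating-point operations per parameter per training token. A minimal Python sketch using that rule of thumb (both the rule and the model sizes below are illustrative assumptions, not figures from the order):

```python
# Back-of-the-envelope check against the EO's reporting threshold.
# Rule of thumb: training compute ~ 6 * parameters * training tokens
# (a common scaling-laws approximation, not language from the EO).

THRESHOLD_FLOPS = 1e26  # the executive order's reporting trigger

def training_flops(parameters: float, tokens: float) -> float:
    """Estimate total training compute in floating-point operations."""
    return 6 * parameters * tokens

# Hypothetical model sizes, for illustration only.
models = [
    ("GPT-3-class (175B params, 300B tokens)", 175e9, 300e9),
    ("hypothetical frontier model (1T params, 15T tokens)", 1e12, 15e12),
]
for name, params, tokens in models:
    flops = training_flops(params, tokens)
    status = "must report" if flops > THRESHOLD_FLOPS else "below threshold"
    print(f"{name}: ~{flops:.1e} FLOPs -> {status}")
```

By that estimate, even a hypothetical trillion-parameter model trained on 15 trillion tokens lands just under the line (about 9 x 10^25 FLOPs), which squares with officials' claim that nothing on the market today is covered.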

What's more, the EO will encourage the Departments of Energy and Homeland Security to address AI threats "to critical infrastructure, as well as chemical, biological, radiological, nuclear, and cybersecurity risks," per the release. "Agencies that fund life-science projects will establish these standards as a condition of federal funding, creating powerful incentives to ensure appropriate screening and manage risks potentially made worse by AI." In short, any developers found in violation of the EO can likely expect a prompt and unpleasant visit from the DoE, FDA, EPA or other applicable regulatory agency, regardless of their AI model’s age or processing speed.

In an effort to proactively address the decrepit state of America's digital infrastructure, the order also seeks to establish a cybersecurity program, based loosely on the administration's existing AI Cyber Challenge, to develop AI tools that can autonomously root out and shore up security vulnerabilities in critical software infrastructure. It remains to be seen whether those systems will be able to address the concerns of misbehaving models that SEC head Gary Gensler recently raised.

AI watermarking and cryptographic validation

We're already seeing the normalization of deepfake trickery and AI-empowered disinformation on the campaign trail. So, the White House is taking steps to ensure that the public can trust the text, audio and video content that it publishes on its official channels. The public must be able to easily validate whether the content they see is AI-generated or not, argued White House officials on the press call. 

Image: an AI-generated picture of penguins in a desert, with a Content Credentials information window open in the corner. (Adobe)

The Department of Commerce is in charge of this effort and is expected to work closely with existing industry advocacy groups like the C2PA and its sister organization, the CAI, to develop and implement a watermarking system for federal agencies. “We aim to support and facilitate and help standardize that work [by the C2PA],” administration officials said. “We see ourselves as plugging into that ecosystem.”

Officials further explained that the government is supporting the underlying technical standards and practices that will lead to digital watermarking’s wider adoption — similar to the work it did around developing the HTTPS ecosystem and in getting both developers and the public on board with it. This will help federal officials achieve their other goal of ensuring that the government's official messaging can be relied upon.

Civil rights and consumer protections

The Blueprint for an AI Bill of Rights that the White House released last October directed agencies to “combat algorithmic discrimination while enforcing existing authorities to protect people's rights and safety,” the administration official said. “But there's more to do.”

The new EO will require that guidance be extended to “landlords, federal benefits programs and federal contractors” to prevent AI systems from exacerbating discrimination within their spheres of influence. It will also direct the Department of Justice to develop best practices for investigating and prosecuting civil rights violations related to AI, as well as, according to the announcement, “the use of AI in sentencing, parole and probation, pretrial release and detention, risk assessments, surveillance, crime forecasting and predictive policing, and forensic analysis."

Additionally, the EO calls for prioritizing federal support to accelerate development of privacy-preserving techniques that would enable future large language models to be trained on large datasets without the current risk of leaking personal details that those datasets might contain. These solutions could include “cryptographic tools that preserve individuals’ privacy,” developed with assistance from the Research Coordination Network and National Science Foundation. The executive order also reiterates its calls for bipartisan legislation from Congress addressing the broader privacy issues that AI systems present for consumers.
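The order doesn't name which techniques qualify, but differentially private training is one well-studied member of the genre: each example's gradient is clipped to a fixed norm and noise is added before the model update, bounding what the trained model can memorize about any one person. A toy numpy sketch of that aggregation step (the clip norm and noise multiplier are illustrative choices, not values from any standard):

```python
import numpy as np

def dp_gradient_step(per_example_grads, clip_norm=1.0, noise_multiplier=1.1):
    """DP-SGD-style aggregation: clip each example's gradient to a fixed
    norm, sum them, then add Gaussian noise scaled to the clipping bound."""
    clipped = []
    for g in per_example_grads:
        norm = np.linalg.norm(g)
        clipped.append(g * min(1.0, clip_norm / (norm + 1e-12)))
    total = np.sum(clipped, axis=0)
    noise = np.random.normal(0.0, noise_multiplier * clip_norm, size=total.shape)
    return (total + noise) / len(per_example_grads)

# Illustrative use: eight examples, four-dimensional gradients.
grads = [np.random.randn(4) for _ in range(8)]
print(dp_gradient_step(grads))
```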

In terms of healthcare, the EO states that the Department of Health and Human Services will establish a safety program that tracks and remedies unsafe, AI-based medical practices. Educators will also see support from the federal government in using AI-based educational tools like personalized chatbot tutoring.

Worker protections

The Biden administration concedes that while the AI revolution is a decided boon for business, its capabilities make it a threat to worker security through job displacement and intrusive workplace surveillance. The EO seeks to address these issues with “the development of principles and employer best practices that mitigate the harms and maximize the benefit of AI for workers,” an administration official said. “We encourage federal agencies to adopt these guidelines in the administration of their programs.”

Photo: Striking workers picket outside Paramount Pictures Studio on Wednesday, September 13, 2023, in Los Angeles, after Hollywood studios walked away from negotiations with the striking actors' union. (Richard Shotwell/Invision/AP)

The EO will also direct the Department of Labor and the Council of Economic Advisers to study both how AI might impact the labor market and how the federal government might better support workers “facing labor disruption” moving forward. Administration officials also pointed to the potential benefits that AI might bring to the federal bureaucracy, including cutting costs and increasing cybersecurity efficacy. “There's a lot of opportunity here, but we have to ensure the responsible government development and deployment of AI,” an administration official said.

To that end, the administration is launching on Monday a new federal jobs portal, AI.gov, which will offer information and guidance on available fellowship programs for folks looking for work with the federal government. “We're trying to get more AI talent across the board,” an administration official said. “Programs like the US Digital Service, the Presidential Innovation Fellowship and USAJobs — doing as much as we can to get talent in the door.” The White House is also looking to expand existing immigration rules to streamline visa criteria, interviews and reviews for folks trying to move to and work in the US in these advanced industries.

The White House reportedly did not brief the industry on this particular swath of radical policy changes, though administration officials did note that they had already been collaborating extensively with AI companies on many of these issues. The Senate held its second AI Insight Forum event last week on Capitol Hill, while Vice President Kamala Harris is scheduled to speak at the UK Summit on AI Safety, hosted by Prime Minister Rishi Sunak on Tuesday.

Photo: Senate Majority Leader Charles Schumer (D-NY) talks to reporters following the weekly Senate Democratic policy luncheon at the U.S. Capitol on September 12, 2023. (Chip Somodevilla via Getty Images)

At an event hosted by The Washington Post on Thursday, Senate Majority Leader Chuck Schumer (D-NY) was already arguing that the executive order did not go far enough and could not be considered an effective replacement for congressional action, which, to date, has been slow in coming.

“There’s probably a limit to what you can do by executive order,” Schumer told WaPo. “They [the Biden Administration] are concerned, and they’re doing a lot regulatorily, but everyone admits the only real answer is legislative.”

This article originally appeared on Engadget at https://www.engadget.com/sweeping-white-house-ai-executive-order-takes-aim-at-the-technologys-toughest-challenges-090008655.html?src=rss

What the evolution of our own brains can tell us about the future of AI

The explosive growth in artificial intelligence in recent years — crowned with the meteoric rise of generative AI chatbots like ChatGPT — has seen the technology take on many tasks that, formerly, only human minds could handle. But despite their increasingly capable linguistic computations, these machine learning systems remain surprisingly inept at making the sorts of cognitive leaps and logical deductions that even the average teenager can consistently get right. 

In this week's Hitting the Books excerpt from A Brief History of Intelligence: Evolution, AI, and the Five Breakthroughs That Made Our Brains, AI entrepreneur Max Bennett examines this puzzling gap in computer competency by tracing the development of the organic machine AIs are modeled after: the human brain.

Focusing on the five evolutionary "breakthroughs," amidst myriad genetic dead ends and unsuccessful offshoots, that led our species to our modern minds, Bennett also shows that the same advancements that took humanity eons to evolve can be adapted to help guide development of the AI technologies of tomorrow. In the excerpt below, we take a look at how generative AI systems like GPT-3 are built to mimic the predictive functions of the neocortex, but still can't quite get a grasp on the vagaries of human speech.

Image: the book's cover, a brain overlaid with words. (HarperCollins)

Excerpted from A Brief History of Intelligence: Evolution, AI, and the Five Breakthroughs That Made Our Brains by Max Bennett. Published by Mariner Books. Copyright © 2023 by Max Bennett. All rights reserved.


Words Without Inner Worlds

GPT-3 is given word after word, sentence after sentence, paragraph after paragraph. During this long training process, it tries to predict the next word in any of these long streams of words. And with each prediction, the weights of its gargantuan neural network are nudged ever so slightly toward the right answer. Do this an astronomical number of times, and eventually GPT-3 can automatically predict the next word based on a prior sentence or paragraph. In principle, this captures at least some fundamental aspect of how language works in the human brain. Consider how automatic it is for you to predict the next symbol in the following phrases:

  • One plus one equals _____

  • Roses are red, violets are _____

You’ve seen similar sentences endless times, so your neocortical machinery automatically predicts what word comes next. What makes GPT-3 impressive, however, is not that it just predicts the next word of a sequence it has seen a million times — that could be accomplished with nothing more than memorizing sentences. What is impressive is that GPT-3 can be given a novel sequence that it has never seen before and still accurately predict the next word. This, too, clearly captures something that the human brain can _____.

Could you predict that the next word was do? I’m guessing you could, even though you had never seen that exact sentence before. The point is that both GPT-3 and the neocortical areas for language seem to be engaging in prediction. Both can generalize past experiences, apply them to new sentences, and guess what comes next.
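To make the mechanism concrete, here is a toy next-word predictor in Python. It is emphatically not GPT-3's architecture, just a frequency counter over a few words of made-up training text; the difference, as Bennett notes, is that a neural network's nudged weights let it generalize to sentences it has never seen, while a counter can only memorize.

```python
from collections import Counter, defaultdict

# Toy "training": count which word follows which in a tiny corpus.
corpus = "roses are red violets are blue one plus one equals two".split()

follows = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    follows[current_word][next_word] += 1

def predict_next(word: str) -> str:
    """Predict the most frequently observed continuation."""
    candidates = follows.get(word)
    return candidates.most_common(1)[0][0] if candidates else "<unknown>"

print(predict_next("violets"))  # -> "are"
print(predict_next("equals"))   # -> "two"
```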

GPT-3 and similar language models demonstrate how a web of neurons can reasonably capture the rules of grammar, syntax, and context if it is given sufficient time to learn. But while this shows that prediction is part of the mechanisms of language, does this mean that prediction is all there is to human language? Try to finish these four questions:

  • If 3x + 1 = 3, then x equals _____

  • I am in my windowless basement, and I look toward the sky, and I see _____

  • He threw the baseball 100 feet above my head, I reached my hand up to catch it, jumped, and _____

  • I am driving as fast as I can to LA from New York. One hour after passing through Chicago, I finally _____

Here something different happens. In the first question, you likely paused and performed some mental arithmetic before being able to answer the question. In the other questions, you probably, even for only a split second, paused to visualize yourself in a basement looking upward, and realized what you would see is the ceiling. Or you visualized yourself trying to catch a baseball a hundred feet above your head. Or you imagined yourself one hour past Chicago and tried to find where you would be on a mental map of America. With these types of questions, more is happening in your brain than merely the automatic prediction of words.

We have, of course, already explored this phenomenon—it is simulating. In these questions, you are rendering an inner simulation, either of shifting values in a series of algebraic operations or of a three-dimensional basement. And the answers to the questions are to be found only in the rules and structure of your inner simulated world.

I gave the same four questions to GPT-3; here are its responses (responses of GPT-3 are bolded and underlined):

  • If 3x + 1 = 3, then x equals 1

  • I am in my windowless basement, and I look toward the sky, and I see a light, and I know that it is a star, and I am happy.

  • He threw the baseball 100 feet above my head, I reached my hand up to catch it, jumped, and caught it. It was a lot of fun!

  • I am driving as fast as I can to LA from New York. One hour after passing through Chicago, I finally get to see the Pacific Ocean.

All four of these responses demonstrate that GPT-3, as of June 2022, lacked an understanding of even simple aspects of how the world works. If 3x + 1 = 3, then x equals 2/3, not 1. If you were in a basement and looked toward the sky, you would see your ceiling, not stars. If you tried to catch a ball 100 feet above your head, you would not catch the ball. If you were driving to LA from New York and you’d passed through Chicago one hour ago, you would not yet be at the coast. GPT-3’s answers lacked common sense.

What I found was not surprising or novel; it is well known that modern AI systems, including these new supercharged language models, struggle with such questions. But that’s the point: Even a model trained on the entire corpus of the internet, running up millions of dollars in server costs — requiring acres of computers on some unknown server farm — still struggles to answer common sense questions, those presumably answerable by even a middle-school human.

Of course, reasoning about things by simulating also comes with problems. Suppose I asked you the following question:

Tom W. is meek and keeps to himself. He likes soft music and wears glasses. Which profession is Tom W. more likely to be?

1) Librarian

2) Construction worker

If you are like most people, you answered librarian. But this is wrong. Humans tend to ignore base rates—did you consider the base number of construction workers compared to librarians? There are probably one hundred times more construction workers than librarians. And because of this, even if 95 percent of librarians are meek and only 5 percent of construction workers are meek, there still will be far more meek construction workers than meek librarians. Thus, if Tom is meek, he is still more likely to be a construction worker than a librarian.
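Plugging illustrative numbers into that ratio makes the point concrete. Assume 10,000 librarians and, at 100 times that, 1,000,000 construction workers:

\[
\text{meek librarians} = 0.95 \times 10{,}000 = 9{,}500
\qquad
\text{meek construction workers} = 0.05 \times 1{,}000{,}000 = 50{,}000
\]

Even though meekness is far rarer among construction workers, meek construction workers still outnumber meek librarians by more than five to one.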

The idea that the neocortex works by rendering an inner simulation and that this is how humans tend to reason about things explains why humans consistently get questions like this wrong. We imagine a meek person and compare that to an imagined librarian and an imagined construction worker. Who does the meek person seem more like? The librarian. Behavioral economists call this the representative heuristic. This is the origin of many forms of unconscious bias. If you heard a story of someone robbing your friend, you can’t help but render an imagined scene of the robbery, and you can’t help but fill in the robbers. What do the robbers look like to you? What are they wearing? What race are they? How old are they? This is a downside of reasoning by simulating — we fill in characters and scenes, often missing the true causal and statistical relationships between things.

It is with questions that require simulation where language in the human brain diverges from language in GPT-3. Math is a great example of this. The foundation of math begins with declarative labeling. You hold up two fingers or two stones or two sticks, engage in shared attention with a student, and label it two. You do the same thing with three of each and label it three. Just as with verbs (e.g., running and sleeping), in math we label operations (e.g., add and subtract). We can thereby construct sentences representing mathematical operations: three add one.

Humans don’t learn math the way GPT-3 learns math. Indeed, humans don’t learn language the way GPT-3 learns language. Children do not simply listen to endless sequences of words until they can predict what comes next. They are shown an object, engage in a hardwired nonverbal mechanism of shared attention, and then the object is given a name. The foundation of language learning is not sequence learning but the tethering of symbols to components of a child’s already present inner simulation.

A human brain, but not GPT-3, can check the answers to mathematical operations using mental simulation. If you add one to three using your fingers, you notice that you always get the thing that was previously labeled four.

You don’t even need to check such things on your actual fingers; you can imagine these operations. This ability to find the answers to things by simulating relies on the fact that our inner simulation is an accurate rendering of reality. When I mentally imagine adding one finger to three fingers, then count the fingers in my head, I count four. There is no reason why that must be the case in my imaginary world. But it is. Similarly, when I ask you what you see when you look toward the ceiling in your basement, you answer correctly because the three-dimensional house you constructed in your head obeys the laws of physics (you can’t see through the ceiling), and hence it is obvious to you that the ceiling of the basement is necessarily between you and the sky. The neocortex evolved long before words, already wired to render a simulated world that captures an incredibly vast and accurate set of physical rules and attributes of the actual world.

To be fair, GPT-3 can, in fact, answer many math questions correctly. GPT-3 will be able to answer 1 + 1 =___ because it has seen that sequence a billion times. When you answer the same question without thinking, you are answering it the way GPT-3 would. But when you think about why 1 + 1 = 2, when you prove it to yourself again by mentally imagining the operation of adding one thing to another thing and getting back two things, then you know that 1 + 1 = 2 in a way that GPT-3 does not.

The human brain contains both a language prediction system and an inner simulation. The best evidence for the idea that we have both these systems is experiments pitting one system against the other. Consider the cognitive reflection test, designed to evaluate someone’s ability to inhibit her reflexive response (e.g., habitual word predictions) and instead actively think about the answer (e.g., invoke an inner simulation to reason about it):

Question 1: A bat and a ball cost $1.10 in total. The bat costs $1.00 more than the ball. How much does the ball cost?

If you are like most people, your instinct, without thinking about it, is to answer ten cents. But if you thought about this question, you would realize this is wrong; the answer is five cents. Similarly:

Question 2: If it takes 5 machines 5 minutes to make 5 widgets, how long would it take 100 machines to make 100 widgets?

Here again, if you are like most people, your instinct is to say “One hundred minutes,” but if you think about it, you would realize the answer is still five minutes.
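For the record, here is the arithmetic behind both correct answers, writing b for the ball's price in dollars:

\[
b + (b + 1.00) = 1.10 \;\Rightarrow\; 2b = 0.10 \;\Rightarrow\; b = 0.05
\]

\[
\frac{5 \text{ widgets}}{5 \text{ machines} \times 5 \text{ min}} = \frac{1}{5}\,\frac{\text{widget}}{\text{machine}\cdot\text{min}}
\;\Rightarrow\;
t = \frac{100 \text{ widgets}}{100 \text{ machines} \times \frac{1}{5}} = 5 \text{ min}
\]

The bat costs $1.05, and each machine still takes five minutes per widget no matter how many machines run in parallel.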

And indeed, as of December 2022, GPT-3 got both of these questions wrong in exactly the same way people do: it answered ten cents to the first question and one hundred minutes to the second.

The point is that human brains have an automatic system for predicting words (one probably similar, at least in principle, to models like GPT-3) and an inner simulation. Much of what makes human language powerful is not the syntax of it, but its ability to give us the necessary information to render a simulation about it and, crucially, to use these sequences of words to render the same inner simulation as other humans around us.

This article originally appeared on Engadget at https://www.engadget.com/hitting-the-books-a-brief-history-of-intelligence-max-bennett-mariner-books-143058118.html?src=rss

Leica’s M11-P is a disinformation-resistant camera built for wealthy photojournalists

It's getting to the point these days that we can't even trust our own eyes, given the amount of digital trickery, trolling, misinformation and disinformation dominating social media. Heck, even reputable tech companies are selling us solutions to reimagine historical events. Not Leica, though! The venerated camera company officially announced the hotly anticipated M11-P on Thursday, its first camera to incorporate the Content Credentials secure metadata system.

Content Credentials are the result of efforts by the Content Authenticity Initiative (CAI), "a group of creators, technologists, journalists, and activists leading the global effort to address digital misinformation and content authenticity," and the Coalition for Content Provenance and Authenticity (C2PA), "a formal coalition dedicated exclusively to drafting technical standards and specifications as a foundation for universal content provenance." These intertwined industry advocacy groups created the Content Credentials system in response to the growing abuse and misuse of generative AI systems in creating and spreading misinformation online.

"The Leica M11-P launch will advance the CAI’s goal of empowering photographers everywhere to attach Content Credentials to their photographs at the time of capture," Santiago Lyon, Head of Advocacy and Education at CAI, said in a press statement, "creating a chain of authenticity from camera to cloud and enabling photographers to maintain a degree of control over their art, story and context."

"This is the realization of a vision the CAI and our members first set out four years ago, transforming principles of trust and provenance into consumer-ready technology," he continued.

Image: the Leica M11-P's Content Credentials display. (Leica)

Content Credentials work by capturing specific metadata about the photograph — the camera used to take it, as well as the location, time and other details about the shot — and locking it in a secure "manifest" that is bundled with the image itself using a cryptographic key (the process is opt-in for the photog). Those credentials can easily be verified online or in the Leica FOTOS app. Whenever someone subsequently edits that photo, the changes are recorded to an updated manifest, rebundled with the image and updated in the Content Credentials database whenever it is reshared on social media. Users who find these images online can click on the CR icon in the picture's corner to pull up all of this historical manifest information as well, providing a clear chain of provenance, presumably, all the way back to the original photographer. The CAI describes Content Credentials as a "nutrition label" for photographs.
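In spirit, the mechanism resembles the Python sketch below. Real Content Credentials use certificate-based signatures rather than the shared-key HMAC stand-in here, and the field names are illustrative assumptions rather than the C2PA schema, but the bind-hash-sign-verify shape is the same:

```python
import hashlib, hmac, json

SIGNING_KEY = b"stand-in for the camera's private key"  # real C2PA uses certificates

def make_manifest(image_bytes: bytes, metadata: dict) -> dict:
    """Bind capture metadata to the exact image bytes, then sign the bundle."""
    manifest = {
        "image_sha256": hashlib.sha256(image_bytes).hexdigest(),
        "metadata": metadata,  # camera, time, location, edit history...
    }
    payload = json.dumps(manifest, sort_keys=True).encode()
    manifest["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return manifest

def verify(image_bytes: bytes, manifest: dict) -> bool:
    """Any change to the pixels or the metadata invalidates the signature."""
    unsigned = {k: v for k, v in manifest.items() if k != "signature"}
    if unsigned["image_sha256"] != hashlib.sha256(image_bytes).hexdigest():
        return False
    payload = json.dumps(unsigned, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, manifest["signature"])

photo = b"...raw image bytes..."
m = make_manifest(photo, {"camera": "Leica M11-P", "time": "2023-10-26T09:00Z"})
print(verify(photo, m))         # True: untouched image checks out
print(verify(photo + b"x", m))  # False: edited image no longer matches
```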

The M11-P itself is exactly what you'd expect from a company that's been at the top of the camera market since the middle of the last century. It offers a 60 MP BSI CMOS sensor, a Maestro-III processor and 256 GB of internal storage. The M11-P is on sale now, but it retails for $9,480, so, freelancers, sorry.

This article originally appeared on Engadget at https://www.engadget.com/leicas-m11-p-is-a-disinformation-resistant-camera-built-for-wealthy-photojournalists-130032517.html?src=rss

Google updates Maps with a flurry of AI features including ‘Immersive View for routes’

As with all things Google of late, AI capabilities are coming to Maps. The company announced a slew of machine learning updates for the popular app on Thursday, including an "Immersive View" for route planning, deeper Lens integration for local navigation and more accurate real-time information.

Back in May at its I/O developer conference, Google debuted Immersive View for routes, which provides a street-level visual preview of your planned route. Whether you're on foot, on a bike, taking public transportation or driving, it will let you scrub back and forth through street-level, turn-by-turn visuals of the path you're taking. The feature arrives on iOS and Android this week for Amsterdam, Barcelona, Dublin, Florence, Las Vegas, London, Los Angeles, Miami, New York, Paris, San Francisco, San Jose, Seattle, Tokyo and Venice.

Just because you can see the route to get where you're going doesn't guarantee you'll be able to read the signage along the way. Google is revamping its existing AI-based Search with Live View feature in Maps. Simply tap the Lens icon in Maps and wave your phone around, and the system will determine your precise street-level location and direct you to nearby resources like ATMs, transit stations, restaurants, coffee shops and stores.

The map itself is set to receive a significant upgrade. Buildings along your route will be more accurately depicted within the app to help you better orient yourself in unfamiliar cities, and lane details along tricky highway interchanges will be more clearly defined in-app as well. Those updates will arrive for users in a dozen countries, including the US, Canada, France and Germany, over the next few months. US users will also start to see better in-app HOV lane designations, and European customers should expect a significant expansion of Google's AI speed limit sign reader technology, out to 20 nations in total.

Image: a map of nearby EV charging stations. (Google)

Google Maps also runs natively in a growing number of electric vehicles as part of the Android Automotive OS ecosystem. That version of Maps is getting an update too, as part of the new Places API. Starting this week, drivers will see more information about nearby charging stations, including whether the plugs work with their EV, the power throughput of the charger and whether the plug has been used recently — an indirect means of inferring whether or not the station is out of service, which, Google helpfully points out, is the case for around 25 percent of them.
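For developers, that charger data would presumably surface through the Places API. A hedged Python sketch of a nearby-charger query follows; the searchNearby endpoint exists in the new Places API, but treat the EV-specific field names and the response shape here as assumptions rather than confirmed schema:

```python
import requests

API_KEY = "YOUR_KEY"  # placeholder

# Hypothetical nearby search for EV charging stations; the
# "evChargeOptions" field mask is an assumption about the schema.
resp = requests.post(
    "https://places.googleapis.com/v1/places:searchNearby",
    headers={
        "X-Goog-Api-Key": API_KEY,
        "X-Goog-FieldMask": "places.displayName,places.evChargeOptions",
    },
    json={
        "includedTypes": ["electric_vehicle_charging_station"],
        "locationRestriction": {
            "circle": {
                "center": {"latitude": 37.42, "longitude": -122.08},
                "radius": 5000.0,
            }
        },
    },
)
for place in resp.json().get("places", []):
    print(place.get("displayName"), place.get("evChargeOptions"))
```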

Even search is improving with the new update. Users will soon be able to look for nearby destinations that meet more esoteric criteria, such as “animal latte art” or “pumpkin patch with my dog,” the results of which are gleaned from the analysis of "billions of photos shared by the Google Maps community," per a Google blog post Thursday.

This article originally appeared on Engadget at https://www.engadget.com/google-maps-update-ai-immersive-view-search-ev-charger-location-130015451.html?src=rss

The US Senate and Silicon Valley reconvene for a second AI Insight Forum

Senator Charles Schumer (D-NY) once again played host to Silicon Valley’s AI leaders on Tuesday as the US Senate reconvened its AI Insight Forum for a second time. On the guest list this go around: manifesto enthusiast Marc Andreessen and venture capitalist John Doerr, as well as Max Tegmark of the Future of Life Institute and NAACP CEO Derrick Johnson. On the agenda: “the transformational innovation that pushes the boundaries of medicine, energy, and science, and the sustainable innovation necessary to drive advancements in security, accountability, and transparency in AI,” according to a release from Sen. Schumer’s office.

Upon exiting the meeting Tuesday, Schumer told the assembled press, "it is clear that American leadership on AI can’t be done on the cheap. Almost all of the experts in today’s Forum called for robust, sustained federal investment in private and public sectors to achieve our goals of American-led transformative and sustainable innovation in AI."

Per estimates from the National Security Commission on AI, paying for that could cost around $32 billion a year. However, Schumer believes those funding challenges can be addressed by "leveraging the private sector by employing new and innovative funding mechanisms – like the Grand Challenges prize idea."

"We must prioritize transformational innovation, to help create new vistas, unlock new cures, improve education, reinforce national security, protect the global food supply, and more," Schumer remarked. But in doing so, we must act sustainably in order to minimize harms to workers, civil society and the environment. "We need to strike a balance between transformational and sustainable innovation," Schumer said. "Finding this balance will be key to our success."

Senators Brian Schatz (D-HI) and John Kennedy (R-LA) also got in on the proposed regulatory action Tuesday, introducing legislation that would provide more transparency on AI-generated content by requiring clear labeling and disclosures. Such technology could resemble the Content Credentials tag that the C2PA and CAI industry advocacy groups are developing.

"Our bill is simple," Senator Schatz said in a press statement. "If any content is made by artificial intelligence, it should be labeled so that people are aware and aren’t fooled or scammed.”

The Schatz-Kennedy AI Labeling Act, as they're calling it, would require generative AI system developers to clearly and conspicuously disclose AI-generated content to users. Those developers, and their licensees, would also have to take "reasonable steps" to prevent "systematic publication of content without disclosures." The bill would also establish a working group to create non-binding technical standards to help social media platforms automatically identify such content as well.

“​​It puts the onus where it belongs: on the companies and not the consumers,” Schatz said on the Senate floor Tuesday. “Labels will help people to be informed. They will also help companies using AI to build trust in their content.”

Tuesday’s meeting follows the recent introduction of new AI legislation, dubbed the Artificial Intelligence Advancement Act of 2023 (S. 3050). Senators Martin Heinrich (D-NM), Mike Rounds (R-SD), Charles Schumer (D-NY) and Todd Young (R-IN) all co-sponsored the bill. The bill proposes AI bug bounty programs and would require a vulnerability analysis study for AI-enabled military applications. Its passage into law would also launch a report into AI regulation in the financial services industry (which the head of the SEC had recently been lamenting) as well as a second report on data sharing and coordination.

“It’s frankly a hard challenge,” SEC Chairman Gary Gensler told The Financial Times recently, speaking on the challenges the financial industry faces in AI adoption and regulation. “It’s a hard financial stability issue to address because most of our regulation is about individual institutions, individual banks, individual money market funds, individual brokers; it’s just in the nature of what we do.”

"Working people are fighting back against artificial intelligence and other technology used to eliminate workers or undermine and exploit us," AFL-CIO President Liz Shuler said at the conclusion of Tuesday's forum. "If we fail to involve workers and unions across the entire innovation process, AI will curtail our rights, threaten good jobs and undermine our democracy. But the responsible adoption of AI, properly regulated, has the potential to create opportunity, improve working conditions and build prosperity."

The forums are part of Senator Schumer’s SAFE Innovation Framework, which his office debuted in June. “The US must lead in innovation and write the rules of the road on AI and not let adversaries like the Chinese Communist Party craft the standards for a technology set to become as transformative as electricity,” the program announcement reads.

While Andreessen calls for AI advancement at any cost and Tegmark continues to advocate for a developmental “time out,” rank-and-file AI industry workers are also fighting to make their voices heard ahead of the forum. On Monday, a group of employees from two dozen leading AI firms published an open letter to Senator Schumer, demanding Congress take action to safeguard their livelihoods from the “dystopian future” that manifestos like Andreessen’s would usher in.

“Establishing robust protections related to workplace technology and rebalancing power between workers and employers could reorient the economy and tech innovation toward more equitable and sustainable outcomes,” the letter authors argue.

Senator Ed Markey (D-MA) and Representative Pramila Jayapal (WA-07) had, the previous month, called on leading AI companies to “answer for the working conditions of their data workers, laborers who are often paid low wages and provided no benefits but keep AI products online.”

"We covered a lot of good ground today, and I think we’ll all be walking out of the room with a deeper understanding of how to approach American-led AI innovation," Schumer said Tueseay. "We’ll continue this conversation in weeks and months to come – in more forums like this and committee hearings in Congress – as we work to develop comprehensive, bipartisan AI legislation."

This article originally appeared on Engadget at https://www.engadget.com/the-us-senate-and-silicon-valley-reconvene-for-a-second-ai-insight-forum-143128622.html?src=rss

Adult film star Riley Reid launches Clona.AI, a sexting chatbot platform

Adult film icon and media investor Riley Reid aims to bring the transformational capabilities of generative AI to adult entertainment with an online platform where users can chat with digital versions of content creators. But unlike other, scuzzier adult chatbots, Clona.AI’s avatars are trained with the explicit consent of the creators they’re modeled on, who have direct input in what the “AI companions” will, and won’t, talk about.

For $30 a month, fans and subscribers will be able to hold “intimate conversations” with digital versions of their favorite adult stars, content creators and influencers. The site’s roster currently includes Reid herself and Lena the Plug. A free tier is also available but offers just five chat messages per month. 

“The reality is, AI is coming, and if it's not Clona, it’s somebody else,” Reid told 404 Media. “When [other people] use deepfakes or whatever — if I'm not partnering up with it, then someone else is going to steal my likeness and do it without me. So being presented with this opportunity, I was so excited because I felt like I had a chance to be a part of society's technological advances.”

Clona uses Meta’s Llama 2 large language model as a base, then heavily refines and retrains it to reflect the personality of the person it’s based on. Reid explains that her model was first trained on a variety of her online media, including interviews, podcast appearances and YouTube videos (in addition to some of her X-rated work), before its responses were further fine-tuned through chats between the AI and Reid herself.

“I’ll be able to see how it responds to users, and edit it to be like ‘no, I would have said it more like this,’’’ Reid said. “But in the beginning my focus was on things like making sure it had my dogs’ names right, making sure I was fact-checking it.”

While the AI companion will be capable of talking dirty, how dirty that gets depends on the actor’s preferences, not the user’s. Reid notes that her model, for example, will not discuss physically dangerous sex acts with users. "I don't know if the tech team thought about the sounding guys, but I was like, I thought about them,” she said.

Generative AI technology has shown tremendous potential in creating digital clones of deceased celebrities and recording artists. The process requires little more than the celeb’s permission (or that of their estate) and a sufficiently large corpus of their vocal or video recordings. However, we’ve also already seen that technology misused in deepfake pornography and shady dental advertising. Unscrupulous data scraping practices on the public web (the scraped data is then used to train LLMs) have also raised difficult questions regarding modern copyright law, copyright infringement and Grammy award eligibility.

Still, Reid remains optimistic about the historically proven resilience of the sex industry. “I feel like we're gonna be a huge part of AI adapting into our society, because porn is always like that,” Reid said. “It’s what it did with the internet. And the porn world has seen so many advances in technology.”

This article originally appeared on Engadget at https://www.engadget.com/adult-film-star-riley-reid-launches-clonaai-a-sexting-chatbot-platform-000509221.html?src=rss

Qualcomm brings on-device AI to mobile and PC

Qualcomm is no stranger to running artificial intelligence and machine learning systems on-device, without an internet connection; it’s been doing so in its camera chipsets for years. But on Tuesday at Snapdragon Summit 2023, the company announced that on-device AI is finally coming to mobile devices and Windows 11 PCs as part of the new Snapdragon 8 Gen 3 and X Elite chips.

Both chipsets were built from the ground up with generative AI capabilities in mind and can support a variety of large language models (LLMs), language vision models (LVMs) and transformer network-based automatic speech recognition (ASR) models (up to 10 billion parameters for the SD8 Gen 3 and 13 billion for the X Elite) entirely on-device. That means you’ll be able to run anything from Baidu’s ERNIE 3.5 to OpenAI’s Whisper, Meta's Llama 2 or Google’s Gecko on your phone or laptop, without an internet connection. Qualcomm’s chips are optimized for voice, text and image inputs.

“It's important to have a wide array of support underneath the hood for these models to be running and therefore heterogeneous compute is extremely important,” Durga Malladi, SVP & General Manager, Technology Planning & Edge Solutions at Qualcomm, told reporters at a prebriefing last week. “We have state-of-the-art CPU, GPU, and NPU (Neural Processing Unit) processors that are used concurrently, as multiple models are running at any given point in time.”

The Qualcomm AI Engine comprises the Oryon CPU, the Adreno GPU and the Hexagon NPU. Combined, they handle up to 45 TOPS (trillions of operations per second) and can crunch 30 tokens per second on laptops and 20 tokens per second on mobile devices, tokens being the basic units of text and data that LLMs process and generate. The chipsets use Samsung’s 4.8GHz LP-DDR5x DRAM for their memory allocation.
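Some quick back-of-the-envelope math shows what those figures mean in practice; the 4-bit quantization below is our assumption, not a Qualcomm spec:

```python
# Rough arithmetic behind on-device LLMs (quantization width is an assumption).
params = 10e9            # 10B parameters, the SD8 Gen 3's stated ceiling
bits_per_weight = 4      # assumed 4-bit quantized weights
tokens_per_second = 20   # Qualcomm's mobile figure

weights_gb = params * bits_per_weight / 8 / 1e9
print(f"Model weights: ~{weights_gb:.0f} GB of memory")  # ~5 GB

reply_tokens = 250       # roughly a few paragraphs of text
print(f"Typical reply: ~{reply_tokens / tokens_per_second:.1f} s")  # ~12.5 s
```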

Image: a summary of the new features offered on the X Elite and SD8 Gen 3 chips. (Qualcomm)

“Generative AI has demonstrated the ability to take very complex tasks, solve them and resolve them in a very efficient manner,” he continued. Potential use cases could include meeting and document summarization or email drafting for consumers, and prompt-based computer code or music generation for enterprise applications, Malladi noted.

Or you could just use it to take pretty pictures. Qualcomm is integrating its previous work with edge AI, Cognitive ISP. Devices using these chipsets will be able to edit photos in real time and in as many as 12 layers. They'll also be able to capture clearer images in low light, remove unwanted objects from photos (a la Google’s Magic Eraser) or expand image backgrounds. Users can even watermark their shots as being real and not AI-generated, using Truepic photo capture.

Having an AI that lives primarily on your phone or mobile device, rather than in the cloud, will offer users myriad benefits over the current system. Much like enterprise AIs that take a general model (e.g. GPT-4) and tune it using a company’s internal data to provide more accurate and on-topic answers, a locally-stored AI “over time… gradually get personalized,” Malladi said, “in the sense that… the assistant gets smarter and better, running on the device in itself.”

What’s more, the inherent delay present when the model has to query the cloud for processing or information doesn’t exist when all of the assets are local. As such, both the X Elite and SD8 gen 3 are capable of not only running Stable Diffusion on-device but generating images in less than 0.6 seconds.

The capacity to run bigger and more capable models, and interact with those models using our speaking words instead of our typing words, could ultimately prove the biggest boon to consumers. “There's a very unique way in which we start interfacing the devices and voice becomes a far more natural interface towards these devices — as well in addition to everything else,” Malladi said. “We believe that it has the potential to be a transformative moment, where we start interacting with devices in a very different way compared to what we've done before.”

Mobile devices and PCs are just the start for Qualcomm’s on-device AI plans. The 10-13 billion parameter limit is already moving towards 20 billion-plus parameters as the company develops new chip iterations. “These are very sophisticated models,” Malladi commented. “The use cases that you build on this are quite impressive.”

“When you start thinking about ADAS (Advanced Driver Assistance Systems) and you have multi-modality [data] coming in from multiple cameras, IR sensors, radar, lidar — in addition to voice, which is the human that is inside the vehicle in itself,” he continued. “The size of that model is pretty large, we're talking about 30 to 60 billion parameters already.” Eventually, these on-device models could approach 100 billion parameters or more, according to Qualcomm’s estimates.

This article originally appeared on Engadget at https://www.engadget.com/qualcomm-brings-on-device-ai-to-mobile-and-pc-190030938.html?src=rss
