The White House lays out extensive AI guidelines for the federal government

It's been five months since President Joe Biden signed an executive order (EO) to address the rapid advancements in artificial intelligence. The White House is today taking another step forward in implementing the EO with a policy that aims to regulate the federal government's use of AI. Safeguards that the agencies must have in place include, among other things, ways to mitigate the risk of algorithmic bias.

"I believe that all leaders from government, civil society and the private sector have a moral, ethical and societal duty to make sure that artificial intelligence is adopted and advanced in a way that protects the public from potential harm while ensuring everyone is able to enjoy its benefits," Vice President Kamala Harris told reporters on a press call.

Harris announced three binding requirements under a new Office of Management and Budget (OMB) policy. First, agencies will need to ensure that any AI tools they use "do not endanger the rights and safety of the American people." They have until December 1 to put "concrete safeguards" in place to ensure that the AI systems they're employing don't impact Americans' safety or rights. Otherwise, the agency will have to stop using an AI product unless its leaders can justify that scrapping the system would have an "unacceptable" impact on critical operations.

Impact on Americans' rights and safety

Per the policy, an AI system is deemed to impact safety if it "is used or expected to be used, in real-world conditions, to control or significantly influence the outcomes of" certain activities and decisions. Those include maintaining election integrity and voting infrastructure; controlling critical safety functions of infrastructure like water systems, emergency services and electrical grids; autonomous vehicles; and operating the physical movements of robots in "a workplace, school, housing, transportation, medical or law enforcement setting."

Unless they have appropriate safeguards in place or can otherwise justify their use, agencies will also have to ditch AI systems that infringe on the rights of Americans. Purposes that the policy presumes to impact rights include predictive policing; social media monitoring for law enforcement; detecting plagiarism in schools; blocking or limiting protected speech; detecting or measuring human emotions and thoughts; pre-employment screening; and "replicating a person’s likeness or voice without express consent."

When it comes to generative AI, the policy stipulates that agencies should assess its potential benefits. They also need to "establish adequate safeguards and oversight mechanisms that allow generative AI to be used in the agency without posing undue risk."

Transparency requirements

The second requirement will force agencies to be transparent about the AI systems they're using. "Today, President Biden and I are requiring that every year, US government agencies publish online a list of their AI systems, an assessment of the risks those systems might pose and how those risks are being managed," Harris said. 

As part of this effort, agencies will need to publish government-owned AI code, models and data, as long as doing so won't harm the public or government operations. If an agency can't disclose specific AI use cases for sensitivity reasons, they'll still have to report metrics.

Last but not least, federal agencies will need to have internal oversight of their AI use. That includes each department appointing a chief AI officer to oversee all of an agency's use of AI. "This is to make sure that AI is used responsibly, understanding that we must have senior leaders across our government who are specifically tasked with overseeing AI adoption and use," Harris noted. Many agencies will also need to have AI governance boards in place by May 27.

The vice president added that prominent figures from the public and private sectors (including civil rights leaders and computer scientists) helped shape the policy along with business leaders and legal scholars.

The OMB suggests that, by adopting the safeguards, the Transportation Security Administration may have to let airline travelers opt out of facial recognition scans without losing their place in line or facing a delay. It also suggests that there should be human oversight over things like AI fraud detection and diagnostics decisions in the federal healthcare system.

As you might imagine, government agencies are already using AI systems in a variety of ways. The National Oceanic and Atmospheric Administration is working on artificial intelligence models to help it more accurately forecast extreme weather, floods and wildfires, while the Federal Aviation Administration is using a system to help manage air traffic in major metropolitan areas to improve travel time.

"AI presents not only risk, but also a tremendous opportunity to improve public services and make progress on societal challenges like addressing climate change, improving public health and advancing equitable economic opportunity," OMB Director Shalanda Young told reporters. "When used and overseen responsibly, AI can help agencies to reduce wait times for critical government services to improve accuracy and expand access to essential public services."

This policy is the latest in a string of efforts to regulate the fast-evolving realm of AI. While the European Union has passed a sweeping set of rules for AI use in the bloc, and there are federal bills in the pipeline, efforts to regulate AI in the US have taken more of a patchwork approach at the state level. This month, Utah enacted a law to protect consumers from AI fraud. In Tennessee, the Ensuring Likeness Voice and Image Security Act (aka the Elvis Act — seriously) is an attempt to protect musicians from deepfakes, i.e. having their voices cloned without permission.

House passes bill that would bar data brokers from selling Americans’ personal information to ‘adversary’ countries

The House of Representatives approved a measure targeting data brokers’ ability to sell Americans’ personal data to “adversary” countries, like Russia, China, Iran and North Korea. The Protecting Americans’ Data from Foreign Adversaries Act passed with a unanimous 414-0 vote.

The bill, which was introduced alongside a measure that could force a ban or sale of TikTok, would prohibit data brokers from selling Americans’ “sensitive” data to people or entities in “adversary” countries. Much like a recent executive order from President Joe Biden targeting data brokers, the bill specifically covers geolocation, financial, health, and biometric data, as well as other private information like text logs and phone call history.

If passed — the bill will need Senate approval before landing on Biden's desk — it would represent a significant check on the relatively unregulated data broker industry. US officials have previously warned that China and other geopolitical rivals of the United States have already acquired vast troves of Americans’ information from brokers, and privacy advocates have long urged lawmakers to regulate the multibillion-dollar industry.

The bill is the second major piece of bipartisan legislation to come out of the House Energy and Commerce Committee this month. The committee previously introduced the “Protecting Americans from Foreign Adversary Controlled Applications Act,” which would require TikTok to divest itself from parent company ByteDance or face a ban in the US. In a statement, Representatives Frank Pallone and Cathy McMorris Rodgers said that the latest bill “builds” on their work to pass the measure targeting TikTok. “Today’s overwhelming vote sends a clear message that we will not allow our adversaries to undermine American national security and individual privacy by purchasing people’s personally identifiable sensitive information from data brokers,” they said.

The EPA reveals final auto industry regulations to try to keep the world habitable

The Environmental Protection Agency (EPA) unveiled its final pollution emissions standards for the auto industry on Wednesday. The regulations, which include a looser timeframe than those proposed last year, mandate that by 2032, most new passenger car and light truck sales in the US must be electric or hybrid.

Earth is on a disastrous trajectory with climate change, and no amount of baseless conspiracy theories or talking points from the oil and gas industry, Donald Trump or anyone else will change that. Only phasing out fossil fuels and emissions will beat back its worst effects. The Biden Administration’s EPA is trying to do that — while throwing a bone to stakeholders like unions and automakers to navigate the landmines of today’s political realities.

The final rules present a timeline to wind down gas-powered vehicle purchases, making most US auto sales fully electric, hybrid, plug-in hybrid or advanced gasoline by 2032. The transition begins in 2027, but the pace stays moderate until after 2030. That’s a key change from last April’s proposed standards, which called for EVs to make up two-thirds of vehicle sales by 2032.

The shift was an election-year compromise for Biden, who has to balance the crucial battle against climate change with 2024 auto union endorsements. Labor unions had pushed for the more relaxed pace out of fears that a more aggressive transition, like the EPA proposed last year, would lead to job losses. EVs typically require fewer assembly workers than traditional gas-powered vehicles.

Last year, United Auto Workers (UAW) President Shawn Fain withheld support for Biden’s reelection due to concerns about the EV transition. But (perhaps after hearing assurances about the revised rules) the UAW endorsed his reelection bid in January.

“The EPA has made significant progress on its final greenhouse gas emissions rule for light-duty vehicles,” the UAW wrote in a statement about the new rules published by the EPA. “By taking seriously the concerns of workers and communities, the EPA has come a long way to create a more feasible emissions rule that protects workers building ICE vehicles, while providing a path forward for automakers to implement the full range of automotive technologies to reduce emissions.”

Contrary to what online misinformation or your uncle may tell you, the rules — aimed at the auto industry and not consumers — don't make gas-powered cars and trucks illegal. Instead, they require automakers to meet specific emissions standards throughout their product lines. The rules apply to new vehicle sales, not used ones.

The EPA says the final rule will lead to $99 billion in benefits and save the average American driver $6,000 in fuel and maintenance over the life of their vehicles. Other advantages include avoiding 7.2 billion additional tons of CO2 emissions through 2055 and offering “nearly $100 billion of annual net benefits to society.” The reduction in fine particulate matter and ozone will allegedly prevent up to 2,500 premature deaths in 2055 while reducing associated health problems like heart attacks, asthma and other respiratory illnesses.

“Three years ago, I set an ambitious target: that half of all new cars and trucks sold in 2030 would be zero-emission,” President Biden wrote in a statement supplied by The White House to Engadget. “I brought together American automakers. I brought together American autoworkers. Together, we’ve made historic progress. Hundreds of new expanded factories across the country. Hundreds of billions in private investment and thousands of good-paying union jobs. And we’ll meet my goal for 2030 and race forward in the years ahead. Today, we’re setting new pollution standards for cars and trucks. U.S. workers will lead the world on autos making clean cars and trucks, each stamped ‘Made in America.’”

Uber and Lyft are quitting Minneapolis over a driver pay increase

Uber and Lyft plan to end operations in Minneapolis after the city council voted to increase driver pay. The council passed an ordinance on the issue last week. On Thursday, it voted to overrule a mayoral veto of the measure.

The new rules stipulate that ridesharing companies need to pay drivers at least $1.40 per mile and 51 cents per minute (or $5 a ride, whichever is higher) whenever they're ferrying a passenger. Tips are on top of the minimum pay. According to the Associated Press, the council passed the ordinance to bring driver pay closer to the local minimum wage of $15.57 an hour.
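
For a rough sense of how that floor works, here is a minimal Python sketch. It assumes the per-mile and per-minute rates apply only to the passenger-carrying portion of a trip, as described above, that the $5 minimum is evaluated per ride, and it leaves out tips and any finer points of the ordinance's text.

```python
def minneapolis_minimum_pay(miles: float, minutes: float) -> float:
    """Estimate the ordinance's pay floor for a single passenger-carrying trip.

    Assumes $1.40 per mile plus $0.51 per minute with a $5.00 minimum per
    ride; tips come on top of whatever this returns.
    """
    per_mile = 1.40
    per_minute = 0.51
    ride_minimum = 5.00
    earned = miles * per_mile + minutes * per_minute
    return round(max(earned, ride_minimum), 2)  # rounded to cents


# A hypothetical 6-mile, 15-minute trip would have to pay at least $16.05,
# while a short 1-mile, 3-minute hop falls back to the $5.00 ride minimum.
print(minneapolis_minimum_pay(6, 15))  # 16.05
print(minneapolis_minimum_pay(1, 3))   # 5.0
```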

However, Uber and Lyft say they'll end services in the city before the pay rise takes effect on May 1. Lyft says the increase is "deeply flawed," citing a Minnesota study indicating that drivers could meet the minimum wage and still cover health insurance, paid leave and retirement savings at lower rates of $1.21 per mile and 49 cents per minute. “We support a minimum earning standard for drivers, but it should be done in an honest way that keeps the service affordable for riders," spokesperson CJ Macklin told The Verge.

An Uber spokesperson told the publication that the company was disappointed by the council's choice to "ignore the data and kick Uber out of the Twin Cities,” putting around 10,000 drivers out of work. They noted Uber's confidence that by working with drivers and legislators, “we can achieve comprehensive statewide legislation that guarantees drivers a fair minimum wage, protects their independence and keeps rideshare affordable.”

However, Minnesota Governor Tim Walz last year vetoed a bill to boost wages for Uber and Lyft drivers, citing concern over the state becoming one of the most expensive places in the country for ridesharing. Other jurisdictions have mandated minimum driver pay for ridesharing services, including New York City, where the rate starts at about $18 per hour.

If Uber and Lyft follow through on their threat to quit Minneapolis, that could make it harder for people (particularly folks with disabilities and those who can't afford a car of their own) to get around. The rise of ridesharing has upended the taxi industry over the last decade or so: a Minneapolis official says there are now just 39 licensed cab drivers in the city, a significant drop from the 1,948 licensed drivers in January 2014.

Meanwhile, some upstart ridesharing companies are looking to move in and take over from Lyft and Uber. Empower and Wridz, for instance, have shown interest in starting operations in Minneapolis. Both companies ask drivers to pay a monthly subscription fee to use their platforms and find riders. In return, drivers keep the entire fare.

TikTok is now asking users to call their Senators to prevent a US ban

One day after a bill that could lead to a ban of TikTok in the United States passed the House of Representatives, the company is doubling down on its strategy of urging users to call lawmakers. The app began pushing new in-app messages to users asking them to "tell your Senator how important TikTok is to you” and to “ask them to vote no on the TikTok ban.”

The new alerts are the second such message TikTok has pushed to users about the bill. Prior to the House vote, the company prompted users to call their representatives in the House. The step may have backfired: lawmakers accused the company of trying to “interfere” with the legislative process after Congressional offices were reportedly overwhelmed with calls, many of which came from somewhat confused teenagers.

The latest notifications are even more direct. “The House of Representatives just voted to ban TikTok, which impacts 170 million Americans just like you,” the message reads. “Now, if the Senate votes, the future of creativity and communities you love on TikTok could be shut down.” As with the previous alerts, users can choose to “call now,” and the app will find phone numbers if a zip code is provided.

TikTok didn't immediately respond to a request for comment. But the message underscores just how big a threat the “Protecting Americans from Foreign Adversary Controlled Applications Act” is to the company. If the bill becomes law, TikTok would have about six months to sell itself or face a ban in the US. Though there have been several previous attempts to ban the app or force a sale, no measure has received as much bipartisan support so quickly. President Joe Biden has said he would sign the bill into law if it passes the Senate.

TikTok CEO Shou Chew has also appealed directly to users, telling them to “protect your constitutional rights” and promising that the company would “do all we can including exercising our legal rights to protect this amazing platform.”

TikTok’s CEO urges users to ‘protect your constitutional rights’ as US ban looms

Hours after the House passed a bill that could ban TikTok in the United States, Shou Chew, the company’s CEO, urged users to “protect your constitutional rights.” Chew also implied that TikTok would mount a legal challenge if the bill is passed into law.

“We will not stop fighting and advocating for you,” Chew said in a video posted to X. “We will continue to do all we can including exercising our legal rights to protect this amazing platform that we have built with you.” He also asked TikTok users in the US to share their stories with friends, families, and senators. “This legislation, if passed into law, will lead to a ban of TikTok in the United States,” Chew said. “Even the bill’s sponsors admit that’s their goal.”

The bill, known as the “Protecting Americans from Foreign Adversary Controlled Applications Act” passed the House on Wednesday with bipartisan support just days after it was introduced. Should the bill pass into law, it would force TikTok’s parent company ByteDance, a Chinese corporation, to sell TikTok to a US company within six months, or be banned from US app stores and web hosting services. TikTok has challenged state-level bans in the past. Last year, TikTok sued Montana, which banned the app in the state. A federal judge temporarily blocked that ban in November before it went into effect.

Last week, TikTok sent push notifications to the app’s more than 170 million users in the US urging them to call their representatives about the potential ban. “Speak up now — before your government strips 170 million Americans of their Constitutional right to free expression,” the notification said. The wave of notifications reportedly led to House staffers being inundated with calls from high schoolers asking what a Congressman is. Lawmakers criticized the company, which they perceived as trying to “interfere” with the legislative process.

In his appeal, Chew said that banning TikTok would give “more power to a handful of other social media companies.” Former President Donald Trump, who once tried to force ByteDance to sell TikTok in the US, recently expressed a similar sentiment, claiming that banning TikTok would strengthen Meta, whose Reels feature competes directly with TikTok. Chew added that taking TikTok away would also hurt hundreds of thousands of American jobs, creators and small businesses.

EU regulators pass the planet’s first sweeping AI regulations

The European Parliament has approved sweeping legislation to regulate artificial intelligence, nearly three years after the draft rules were first proposed. Officials reached an agreement on AI development in December. On Wednesday, members of the parliament approved the AI Act with 523 votes in favor, 46 against and 49 abstentions.

The EU says the regulations seek to "protect fundamental rights, democracy, the rule of law and environmental sustainability from high-risk AI, while boosting innovation and establishing Europe as a leader in the field." The act defines obligations for AI applications based on potential risks and impact.

The legislation has not become law yet. It's still subject to lawyer-linguist checks, and the European Council needs to formally endorse it. But the AI Act is likely to come into force before the end of the legislative term, ahead of the next parliamentary election in early June.

Most of the provisions will take effect 24 months after the AI Act becomes law, but bans on prohibited applications will apply after six months. The EU is banning practices that it believes will threaten citizens' rights. "Biometric categorization systems based on sensitive characteristics" will be outlawed, as will the "untargeted scraping" of images of faces from CCTV footage and the web to create facial recognition databases. Clearview AI's activity would fall under that category.

Other applications that will be banned include social scoring; emotion recognition in schools and workplaces; and "AI that manipulates human behavior or exploits people’s vulnerabilities." Some aspects of predictive policing will be prohibited, i.e. when it's based entirely on assessing someone's characteristics (such as inferring their sexual orientation or political opinions) or profiling them. Although the AI Act by and large bans law enforcement's use of biometric identification systems, their use will be allowed in certain circumstances with prior authorization, such as to help find a missing person or prevent a terrorist attack.

Applications that are deemed high-risk, including the use of AI in law enforcement and healthcare, are subject to certain conditions. They must not discriminate, and they need to abide by privacy rules. Developers have to show that the systems are transparent, safe and explainable to users, too. As for AI systems that the EU deems low-risk (like spam filters), developers still have to inform users that they're interacting with AI-generated content.

The law has some rules when it comes to generative AI and manipulated media too. Deepfakes and any other AI-generated images, videos and audio will need to be clearly labeled. AI models will have to respect copyright laws too. "Rightsholders may choose to reserve their rights over their works or other subject matter to prevent text and data mining, unless this is done for the purposes of scientific research," the text of the AI Act reads. "Where the rights to opt out has been expressly reserved in an appropriate manner, providers of general-purpose AI models need to obtain an authorization from rightsholders if they want to carry out text and data mining over such works." However, AI models built purely for research, development and prototyping are exempt.

The most powerful general-purpose and generative AI models (those trained using a total computing power of more than 10^25 FLOPs) are deemed to have systemic risks under the rules. The threshold may be adjusted over time, but OpenAI's GPT-4 and DeepMind's Gemini are believed to fall into this category. 
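
For context on what that compute threshold means in practice, here is a back-of-the-envelope Python sketch. The 6 x parameters x training-tokens estimate is a common rule of thumb for dense transformer training compute, not something defined in the Act itself, and the model sizes below are purely hypothetical.

```python
# The AI Act's compute threshold for presuming "systemic risk" in
# general-purpose models: 10^25 floating-point operations.
SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25


def estimated_training_flops(parameters: float, training_tokens: float) -> float:
    """Rough training-compute estimate using the common ~6 * N * D rule of
    thumb for dense transformers (an approximation, not part of the Act)."""
    return 6 * parameters * training_tokens


# Hypothetical models: a 7B-parameter model trained on 2T tokens, and a
# 1T-parameter model trained on 10T tokens.
for name, params, tokens in [("mid-size model", 7e9, 2e12),
                             ("frontier-scale model", 1e12, 10e12)]:
    flops = estimated_training_flops(params, tokens)
    status = "above" if flops > SYSTEMIC_RISK_THRESHOLD_FLOPS else "below"
    print(f"{name}: ~{flops:.1e} FLOPs ({status} the 1e25 threshold)")
```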

The providers of such models will have to assess and mitigate risks, report serious incidents, provide details of their systems' energy consumption, ensure they meet cybersecurity standards and carry out state-of-the-art tests and model evaluations.

As with other EU regulations targeting tech, the penalties for violating the AI Act's provisions can be steep. Companies that break the rules will be subject to fines of up to €35 million ($51.6 million) or up to seven percent of their global annual turnover, whichever is higher.
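
Since the cap is "whichever is higher," it scales with company size. Here is a minimal Python illustration, using hypothetical turnover figures and ignoring the lower caps the Act sets for less serious categories of violations.

```python
def max_ai_act_fine_eur(global_annual_turnover_eur: float) -> float:
    """Upper bound on fines for the most serious AI Act violations:
    EUR 35 million or 7% of global annual turnover, whichever is higher."""
    return max(35_000_000, 0.07 * global_annual_turnover_eur)


# Hypothetical companies: EUR 100 million vs. EUR 10 billion in annual turnover.
for turnover in (100e6, 10e9):
    print(f"Turnover EUR {turnover:,.0f} -> max fine EUR "
          f"{max_ai_act_fine_eur(turnover):,.0f}")
```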

The AI Act applies to any model operating in the EU, so US-based AI providers will need to abide by it, at least in Europe. OpenAI CEO Sam Altman suggested last May that his company might pull out of Europe were the AI Act to become law, but he later said the company had no plans to do so.

To enforce the law, each member country will create its own AI watchdog, and the European Commission will set up an AI Office. The office will develop methods to evaluate models and monitor risks in general-purpose models. Providers of general-purpose models that are deemed to carry systemic risks will be asked to work with the office to draw up codes of conduct.

House passes bill that could ban TikTok

A bill that could force a sale or outright ban on TikTok passed the House just days after it was first introduced. The House of Representatives approved the measure Wednesday in a 352-65 vote, a rare showing of bipartisan support. It now goes to the Senate.

If passed into law, the legislation would give parent company ByteDance a six-month window to sell TikTok or face a ban from US app stores and web hosting services. While the “Protecting Americans from Foreign Adversary Controlled Applications Act” is far from the first effort to force a ban or sale of TikTok, it’s been able to draw more support far more quickly than previous bills.

The measure cleared its first procedural vote in the House last week, just two days after it was introduced. The bill now moves on to the Senate, where its future is less certain. Senator Rand Paul has said he would block it, while other lawmakers have been hesitant to publicly back the measure.

TikTok has called the bill unconstitutional and said it would hurt creators and businesses that rely on the service. "This process was secret and the bill was jammed through for one reason: it's a ban," a TikTok spokesperson said in a statement following the House vote. "We are hopeful that the Senate will consider the facts, listen to their constituents, and realize the impact on the economy, 7 million small businesses, and the 170 million Americans who use our service."

Last week, the company sent a wave of push notifications to users, urging them to ask their representatives to oppose the bill. Congressional staffers reported that offices were overwhelmed with calls, many of which came from confused teenagers. Lawmakers later accused the company of trying to “interfere” with the legislative process.

Free speech and digital rights groups also oppose the bill, with many noting that comprehensive privacy laws would be more effective at protecting Americans’ data than a measure that primarily targets one app. Former President Donald Trump, who once tried to force ByteDance to sell TikTok, has also said he is against the bill, claiming it would strengthen Meta.

In a letter to lawmakers, the Electronic Frontier Foundation (EFF), American Civil Liberties Union (ACLU), Fight for the Future and the Center for Democracy and Technology argued that the bill would “set an alarming global precedent for excessive government control over social media platforms” and would likely “invite copycat measures by other countries … with significant consequences for free expression globally.”

If the bill musters enough votes to pass the Senate, President Joe Biden has said he will sign it into law. His administration has previously pressured ByteDance to sell TikTok, and officials maintain the app poses a national security risk due to its ties to ByteDance, a Chinese company. TikTok has repeatedly denied these claims.

If the bill were to become law, the company would likely mount a legal challenge, as it did in Montana, which passed a statewide ban last year. A federal judge temporarily blocked that ban in November before it could go into effect.

Update March 13, 2024, 12:32PM ET: This story has been updated to add a statement from a TikTok spokesperson.
