The White House has announced an investigation into cars built in China and other unnamed "countries of concern." The Biden administration notes that cars are "constantly connecting" with drivers' phones, other vehicles, American infrastructure and their manufacturers, and that newer models use tech such as driver assist systems.
"Connected vehicles collect large amounts of sensitive data on their drivers and passengers; regularly use their cameras and sensors to record detailed information on US infrastructure; interact directly with critical infrastructure; and can be piloted or disabled remotely," the White House said in a statement. Officials are concerned that "new vulnerabilities and threats" could arise from connected vehicles if foreign governments are able to access data from them. They are especially wary that said countries of concern could use such information in ways that put national security at risk.
The Department of Commerce will lead the investigation. "We need to understand the extent of the technology in these cars that can capture wide swaths of data or remotely disable or manipulate connected vehicles, so we are soliciting information to determine whether to take action under our ICTS [information and communications technology and services] authorities," Commerce Secretary Gina Raimondo said.
Through its advance notice of proposed rulemaking [PDF], the agency is looking for feedback from the public to help determine "the technologies and market participants that may be most appropriate for regulation." The investigation will help the Commerce Department decide whether to take action. It's the first time that the agency's Bureau of Industry and Security is carrying out an investigation under Trump-era Executive Orders "focused on protecting domestic information and communications technology and services supply chains from national security threats," the White House said.
"China is determined to dominate the future of the auto market, including by using unfair practices. China’s policies could flood our market with its vehicles, posing risks to our national security. I’m not going to let that happen on my watch," President Joe Biden said. "Connected vehicles from China could collect sensitive data about our citizens and our infrastructure and send this data back to the People’s Republic of China. These vehicles could be remotely accessed or disabled."
As The Washington Post points out, cars built in China aren't especially common on US roads as yet, but they're becoming an increasingly familiar sight in other markets, such as Europe. While many of the vehicles causing concern are EVs, it's the cars' cameras, sensors and software that are the focus of the probe.
It's not the first time that the US has investigated Chinese companies over concerns that they pose security risks to the country's infrastructure. A few years ago, it banned the import and sale of telecom networking equipment made by Huawei and ZTE (after stopping government employees from using the companies' phones). The government also required telecoms to remove and replace Huawei and ZTE gear in existing infrastructure at great expense.
This article originally appeared on Engadget at https://www.engadget.com/the-us-will-investigate-cars-built-in-china-over-security-concerns-155037465.html?src=rss
The two primary fears around AI are that the information these systems produce is gibberish, and that it'll unjustly take jobs away from people who won't make such sloppy mistakes. But the UK's current government is actively promoting the use of AI to do the work normally done by civil servants, including drafting responses to parliamentary inquiries, the Financial Times reports.
UK Deputy Prime Minister Oliver Dowden is set to unveil a "red box" tool that can allegedly absorb and summarize information from reputable sources, like the parliamentary record. A separate instrument is also being trialed that should work similarly but with individual responses to public consultations. While it's unclear how quickly the AI tool can perform this work, Dowden claims the equivalent work currently takes 25 civil servants three months. However, the drafts would allegedly always be double-checked by a human and include sourcing.
The Telegraph quoted Dowden arguing that implementing AI technology is critical to cutting civil service jobs — something he wants to do. "It really is the only way, I think, if we want to get on a sustainable path to headcount reduction. Remember how much the size of the Civil Service has grown as a result of the pandemic and EU exit preparedness. We need to really embrace this stuff to drive the numbers down." Dowden's statement aligns with hopes from his boss, Prime Minister Rishi Sunak, to use technology to increase government productivity — shockingly, neither person has offered to save money by giving AI their job.
Dowden does show some restraint against having AI do everything. In a pre-speech briefing, he noted that the government wouldn't use AI for any "novel or contentious or highly politically sensitive areas." At the same time, the Cabinet Office's AI division is set to grow from 30 to 70 employees and to get a new budget of £110 million ($139.1 million), up from £5 million ($6.3 million).
This article originally appeared on Engadget at https://www.engadget.com/uk-government-wants-to-use-ai-to-cut-civil-service-jobs-140031159.html?src=rss
President Joe Biden will issue an executive order that aims to limit the mass-sale of Americans’ personal data to “countries of concern,” including Russia and China. The order specifically targets the bulk sale of geolocation, genomic, financial, biometric, health and other personally identifying information.
During a briefing with reporters, a senior administration official said that the sale of such data to these countries poses a national security risk. “Our current policies and laws leave open access to vast amounts of American sensitive personal data,” the official said. “Buying data through data brokers is currently legal in the United States, and that reflects a gap in our national security toolkit that we are working to fill with this program.”
Researchers and privacy advocates have long warned about the national security risks posed by the largely unregulated multibillion-dollar data broker industry. Last fall, researchers at Duke University reported that they were able to easily buy troves of personal and health data about US military personnel while posing as foreign agents.
Biden’s executive order attempts to address such scenarios. It bars data brokers and other companies from selling large troves of Americans’ personal information to countries or entities in Russia, China, Iran, North Korea, Cuba and Venezuela either directly or indirectly. There are likely to be additional restrictions on companies’ ability to sell data as part of cloud service contracts, investment agreements and employment agreements.
Though the White House described the step as “the most significant executive action any President has ever taken to protect Americans’ data security,” it’s unclear how exactly enforcement of the new policies will be handled within the Justice Department. A DoJ official said the executive order would require due diligence from data brokers to vet who they are dealing with, similar to the way companies are expected to adhere to US sanctions.
As the White House points out, there are currently few regulations for the multibillion-dollar data broker industry. The order will do nothing to slow the bulk sale of Americans’ data to countries or companies not deemed to be a security risk. “President Biden continues to urge Congress to do its part and pass comprehensive bipartisan privacy legislation, especially to protect the safety of our children,” a White House statement says.
Update February 28, 2024, 3:00 PM ET: This article was modified to clarify that, while the White House says the order will be issued today, it is unclear whether it has been issued at time of writing.
This article originally appeared on Engadget at https://www.engadget.com/biden-signs-executive-order-to-stop-russia-and-china-from-buying-americans-personal-data-100029820.html?src=rss
The US military has ramped up its use of artificial intelligence tools after the October 7 Hamas attacks on Israel, based on a new report by Bloomberg. Schuyler Moore, US Central Command's chief technology officer, told the news organization that machine learning algorithms helped the Pentagon identify targets for more than 85 air strikes in the Middle East this month.
US bombers and fighter aircraft carried out those air strikes against seven facilities in Iraq and Syria on February 2, fully destroying or at least damaging rockets, missiles, drone storage facilities and militia operations centers. The Pentagon also used AI systems to find rocket launchers in Yemen and surface combatants in the Red Sea, which it then destroyed through multiple air strikes in the same month.
The machine learning algorithms used to narrow down targets were developed under Project Maven, Google's now-defunct partnership with the Pentagon. To be precise, the project entailed the use of Google's artificial intelligence technology by the US military to analyze drone footage and flag images for further human review. It caused an uproar among Google employees: Thousands petitioned the company to end its partnership with the Pentagon, and some even quit over its involvement altogether. A few months after that employee protest, Google decided not to renew its contract, which expired in 2019.
Moore told Bloomberg that US forces in the Middle East haven't stopped experimenting with the use of algorithms to identify potential targets using drone or satellite imagery even after Google ended its involvement. The military has been testing out their use over the past year in digital exercises, she said, but it started using targeting algorithms in actual operations after the October 7 Hamas attacks. She clarified, however, that human workers constantly checked and verified the AI systems' target recommendations. Human personnel were also the ones who proposed how to stage the attacks and which weapons to use. "There is never an algorithm that’s just running, coming to a conclusion and then pushing onto the next step," she said. "Every step that involves AI has a human checking in at the end."
This article originally appeared on Engadget at https://www.engadget.com/the-pentagon-used-project-maven-developed-ai-to-identify-air-strike-targets-103940709.html?src=rss
X, formerly Twitter, is once again restricting content in India. The company's Global Government Affairs account announced that the Indian government had issued an executive order mandating that X withhold specific accounts and posts or face penalties such as "significant fines and imprisonment." X further stated that it doesn't agree with the order and is challenging it.
The designated posts and accounts will only be blocked within India; however, there's no clear list of those affected. "Due to legal restrictions, we are unable to publish the executive orders, but we believe that making them public is essential for transparency," the Global Government Affairs post stated. "This lack of disclosure can lead to a lack of accountability and arbitrary decision-making." X claims to have notified all affected parties.
The posts likely center around the ongoing farmers' protest, which, since February 13, has seen multiple farmers' unions on strike in a bid to get floor pricing, or a minimum support price, for crops sold. Violent clashes between protesters and police have already resulted in at least one death, AP News reports. Mohammed Zubair, an Indian journalist and co-founder of Alt News, shared purported screenshots of suspended accounts belonging to individuals critical of the current government, on-the-ground reporters, prominent farm unionists, and more.
This forced blocking is far from the first incident between X and India. In 2022, X sued the Indian government for "arbitrarily and disproportionately" applying its IT laws passed the year prior. The law required the company to hire a point of contact for the local authorities and a domestic compliance officer. Prior to this concession, in early 2021, the Indian government had threatened to jail X's employees if posts about the then-ongoing farmers' protest stayed live on the site. Shortly after, the country mandated that X remove content criticizing its COVID-19 response.
India dismissed X's suit in June 2023, claiming the company didn't properly explain why it had ever delayed complying with the country's IT laws. The court also fined X 5 million rupees ($60,300), stating, "You are not a farmer but a billion dollar company." The order followed shortly after Twitter co-founder Jack Dorsey claimed that India had threatened to raid employees' homes and shut down the site if the company hadn't taken down posts during the farmers' protest.
This article originally appeared on Engadget at https://www.engadget.com/indias-government-is-forcing-x-to-censor-accounts-via-executive-order-amid-the-farmers-protest-112617420.html?src=rss
A coalition of 20 tech companies signed an agreement Friday to help prevent AI deepfakes in the critical 2024 elections taking place in more than 40 countries. OpenAI, Google, Meta, Amazon, Adobe and X are among the businesses joining the pact to prevent and combat AI-generated content that could influence voters. However, the agreement’s vague language and lack of binding enforcement call into question whether it goes far enough.
The list of companies signing the “Tech Accord to Combat Deceptive Use of AI in 2024 Elections” includes those that create and distribute AI models, as well as social platforms where the deepfakes are most likely to pop up. The signees are Adobe, Amazon, Anthropic, Arm, ElevenLabs, Google, IBM, Inflection AI, LinkedIn, McAfee, Meta, Microsoft, Nota, OpenAI, Snap Inc., Stability AI, TikTok, Trend Micro, Truepic and X (formerly Twitter).
The group describes the agreement as “a set of commitments to deploy technology countering harmful AI-generated content meant to deceive voters.” The signees have agreed to the following eight commitments:
Developing and implementing technology to mitigate risks related to Deceptive AI Election content, including open-source tools where appropriate
Assessing models in scope of this accord to understand the risks they may present regarding Deceptive AI Election Content
Seeking to detect the distribution of this content on their platforms
Seeking to appropriately address this content detected on their platforms
Fostering cross-industry resilience to deceptive AI election content
Providing transparency to the public regarding how the company addresses it
Continuing to engage with a diverse set of global civil society organizations, academics
Supporting efforts to foster public awareness, media literacy, and all-of-society resilience
The accord will apply to AI-generated audio, video and images. It addresses content that "deceptively fake[s] or alter[s] the appearance, voice, or actions of political candidates, election officials, and other key stakeholders in a democratic election, or that provide[s] false information to voters about when, where, and how they can vote."
The signees say they will work together to create and share tools to detect and address the online distribution of deepfakes. In addition, they plan to drive educational campaigns and “provide transparency” to users.
OpenAI, one of the signees, already said last month it plans to suppress election-related misinformation worldwide. Images generated with the company’s DALL-E 3 tool will be encoded with a classifier providing a digital watermark to clarify their origin as AI-generated pictures. The ChatGPT maker said it would also work with journalists, researchers and platforms for feedback on its provenance classifier. It also plans to prevent chatbots from impersonating candidates.
“We’re committed to protecting the integrity of elections by enforcing policies that prevent abuse and improving transparency around AI-generated content,” Anna Makanju, Vice President of Global Affairs at OpenAI, wrote in the group’s joint press release. “We look forward to working with industry partners, civil society leaders and governments around the world to help safeguard elections from deceptive AI use.”
Notably absent from the list is Midjourney, the company behind the AI image generator of the same name, which currently produces some of the most convincing fake photos. However, the company said earlier this month it would consider banning political generations altogether during election season. Last year, Midjourney was used to create a viral fake image of Pope Francis unexpectedly strutting down the street in a puffy white jacket. One of Midjourney's closest competitors, Stability AI (makers of the open-source Stable Diffusion), did participate. Engadget contacted Midjourney for comment about its absence, and we'll update this article if we hear back.
Only Apple is absent among Silicon Valley’s “Big Five.” However, that may be explained by the fact that the iPhone maker hasn’t yet launched any generative AI products, nor does it host a social media platform where deepfakes could be distributed. Regardless, we contacted Apple PR for clarification but hadn’t heard back at the time of publication.
Although the general principles the 20 companies agreed to sound like a promising start, it remains to be seen whether a loose set of agreements without binding enforcement will be enough to combat a nightmare scenario where the world’s bad actors use generative AI to sway public opinion and elect aggressively anti-democratic candidates — in the US and elsewhere.
“The language isn’t quite as strong as one might have expected,” Rachel Orey, senior associate director of the Elections Project at the Bipartisan Policy Center, told The Associated Press on Friday. “I think we should give credit where credit is due, and acknowledge that the companies do have a vested interest in their tools not being used to undermine free and fair elections. That said, it is voluntary, and we’ll be keeping an eye on whether they follow through.”
In January, an AI-generated deepfake of President Biden’s voice was used by two Texas-based companies to robocall New Hampshire voters, urging them not to vote in the state’s primary on January 23. The clip, generated using ElevenLabs’ voice cloning tool, reached up to 25,000 NH voters, according to the state’s attorney general. ElevenLabs is among the pact’s signees.
The Federal Communications Commission (FCC) acted quickly to prevent further abuses of voice-cloning tech in fake campaign calls. Earlier this month, it voted unanimously to ban AI-generated robocalls. The (seemingly eternally deadlocked) US Congress hasn’t passed any AI legislation. In December, the European Union (EU) agreed on the AI Act, an expansive AI safety bill that could influence other nations’ regulatory efforts.
“As society embraces the benefits of AI, we have a responsibility to help ensure these tools don’t become weaponized in elections,” Microsoft Vice Chair and President Brad Smith wrote in a press release. “AI didn’t create election deception, but we must ensure it doesn’t help deception flourish.”
This article originally appeared on Engadget at https://www.engadget.com/microsoft-openai-google-and-others-agree-to-combat-election-related-deepfakes-203942157.html?src=rss