A number of US government agencies are backing a potential move by the Commerce Department to ban TP-Link routers, according to The Washington Post. Multiple sources familiar with internal deliberations spoke with the publication on the condition of anonymity, including a former senior Defense Department official.
A months-long interagency process involving the Departments of Homeland Security, Justice and Defense took place this summer to consider the sweeping move. Investigations into the company stemming from national security concerns have been taking place since at least last year.
At the heart of the potential ban is a concern that TP-Link retains ties to China, despite splitting from Chinese corporation TP-Link Technologies to become a standalone entity in 2022. A spokesperson for TP-Link denied any Chinese ties, saying "any adverse action against TP-Link would have no impact on China, but would harm an American company."
US officials told The Washington Post they are concerned because under Chinese law, TP-Link must comply with Chinese intelligence agency requests and may even be pressured to push malicious software updates to its devices. US-based TP-Link Systems said the company is “not subject to the direction of the PRC intel apparatus.”
TP-Link routers are among the most popular in the United States, with the company claiming 36 percent of US market share. Earlier this year, however, former American cybersecurity official Rob Joyce testified before Congress that TP-Link’s market share was roughly 60 percent, thanks in part to the company selling its equipment below cost in order to drive out competition.
The potential ban of TP-Link products is another in a long list of bureaucratic moves or discussions that have come against the backdrop of trade negotiations between the US and China. While a potential breakthrough in these talks was achieved today, a source for The Washington Post said a ban on TP-Link products remains a bargaining chip for the administration.
Donald Trump and China’s leader, Xi Jinping, have agreed to a one-year pause on the punitive Trump-instated tariffs that are at the heart of the ongoing trade war between the two superpowers. Among the issues discussed when the two leaders met face-to-face in the South Korean city of Busan were China’s chokehold on rare earth metals and the export restrictions on NVIDIA’s AI chips.
Trump had previously made characteristically explosive threats to impose new 100 percent tariffs on imports from China in retaliation for Xi’s tightening grip on rare earths, the processing of which is almost entirely controlled by China. These materials are essential for manufacturing everything from smartphones and EVs to military equipment. As part of the (for now) temporary truce, China reportedly agreed to pause its new export controls for the next 12 months in exchange for Trump lowering tariffs on Chinese imports by 10 percent.
According to The New York Times, Trump said he had discussed semiconductors during his talks with Xi and “did not rule out” the possibility of allowing NVIDIA to sell AI chips to China. The American company was allowed to resume selling its H20 chips in China in July after an initial ban earlier in the year, only for Beijing to reportedly respond by instructing its largest tech companies not to buy them until a national security review had been completed. At their meeting in South Korea, the leaders did not discuss the possible availability of Blackwell chips, NVIDIA’s most advanced AI hardware to date and possibly a motivating factor in China’s apparent indifference to the H20 architecture.
There was also no resolution on TikTok and its future in the US. The last we heard, the Trump administration claimed to be close to an agreement that would see US owners take a majority stake in the Chinese-owned social media giant’s American operations, but nothing has been finalized at the time of writing.
Australia is set to ban under 16s from social media services after the Senate passed a bill to that effect by 34 votes to 19. The legislation will return to the House of Representatives, which will need to approve amendments before it becomes law. That is all but a formality as the government holds a majority in that chamber. The bill, which has been fast-tracked, sailed through the lower house in a 102-13 vote earlier this week.
The government has said that the likes of Snapchat, TikTok, Instagram and X will be subject to the new rules, which won’t come into force for at least 12 months. However, officials still have to confirm which platforms the ban actually covers, as they aren't detailed in the bill. The BBC notes that the country’s communications minister, Michelle Rowland, will determine that with help from the eSafety Commissioner, who will also be responsible for enforcing the law.
The rules will not apply to health and education services, gaming platforms or messaging apps, nor those that don’t require an account. So, the likes of Fortnite, Roblox and YouTube are likely to avoid any ban.
Companies that are subject to the legislation could face fines of up to AUD$49.5 million ($32.1 million) if they fail to comply. They will have to employ age-verification tech, though the specifics have yet to be determined. The government plans to assess various options in the coming months, but Rowland confirmed this week that platforms won't be able to compel users to submit a personal document (such as a passport or driver's license) to verify their age.
Researchers have claimed that mooted age-verification systems may not work in practice. Critics, meanwhile, have raised concerns over privacy protections.
While there are certainly valid concerns about the harms of social media, such platforms can be a lifeline for younger people when they’re used responsibly. They can help vulnerable kids find resources and peers they can turn to for advice. Social media can also help those in rural areas forge authentic social connections with others who live elsewhere.
Under 16s who continue to access banned platforms won’t be punished. Resourceful teens may find it very easy to bypass restrictions using a VPN, which could make the law largely toothless. The online world also extends far beyond the reach of a small number of centralized social media platforms. There are other pockets of the internet that teens can turn to instead. For instance, there are still a large number of active forums for various interests.
When the legislation becomes law, Australia will set the highest minimum age for social media of any jurisdiction. France has tabled legislation to block users under 15 from social media without parental consent and it’s now pushing for the European Union to move forward with a similar undertaking across the entire bloc. Norway plans to bring in legislation along those lines, while the UK's technology secretary recently indicated that it was an option for that country.
Utah passed laws last year to limit minors' social media use, but the state repealed and replaced them earlier this year following legal challenges. However, in September, a judge blocked the most recent legislation just days before it was set to take effect. Other states have considered similar laws.
Australia’s majority party has introduced a bill in Parliament that would ban children under 16 from social media. The legislation, which would put the onus on social platforms rather than children or parents, could fine infringing companies up to AUD$49.5 million ($32.2 million).
The Labor Party’s bill would apply to (among others) Snapchat, TikTok, Instagram and X. It would require platforms to cordon off and destroy any underage user data collected. However, the legislation would include exceptions for health and education services, like Headspace, Google Classroom and YouTube.
“For too many young Australians, social media can be harmful. Almost two-thirds of 14- to 17-year-old Australians have viewed extremely harmful content online, including drug abuse, suicide or self-harm, as well as violent material,” Australian Communications Minister Michelle Rowland told Parliament on Thursday. “A quarter have been exposed to content promoting unsafe eating habits.”
Reuters notes that the law would be one of the most aggressive globally in tackling the problems related to children’s social media use. It wouldn’t include exemptions for parental consent or pre-existing accounts. Essentially, social media companies would have to police their platforms to ensure no child under 16 can use their services.
The bill is supported by the majority (center-left) Labor Party and opposition (right) Liberal Party. “This is a landmark reform,” Australian Prime Minister Anthony Albanese said. “We know some kids will find workarounds, but we’re sending a message to social media companies to clean up their act.”
The (left) Australian Greens have criticized the legislation, saying it ignores expert evidence in “ramming” the law through Parliament without proper scrutiny. “The recent Parliamentary Inquiry into Social Media heard time and time again that an age-ban will not make social media safer for anyone,” Senator Sarah Hanson-Young said in a statement. “[The bill] is complicated to implement and will have unintended consequences for young people.”
Last year, US Surgeon General Vivek Murthy sounded the alarm about the risks of underage social media use. “Children and adolescents who spend more than 3 hours a day on social media face double the risk of mental health problems including experiencing symptoms of depression and anxiety,” the 2023 advisory from the Surgeon General’s office read.
The US requires tech companies to seek parental consent before accessing the data of children under 13, but it doesn’t impose any age restrictions on social media use. Reuters notes that France enacted a social media ban for children under 15 last year, though it still allows them to access the services with parental consent.
The UK government is expected to launch a parliamentary inquiry into the role of social media in last summer's riots, particularly around the use of generative AI, The Guardian reported. As part of the Commons science and technology select committee's social media inquiry, MPs (members of Parliament) wish to cross-examine X owner Elon Musk, along with senior executives from Meta and TikTok.
"[Musk] has very strong views on multiple aspects of this," said Labour chair of the select committee, Chi Onwurah. "I would certainly like the opportunity to cross-examine him to see … how he reconciles his promotion of freedom of expression with his promotion of pure disinformation. [The committee will] get to the bottom of the links between social media algorithms, generative AI, and the spread of harmful or false content."
The government is looking into the use of fake images created by generative AI, often containing Islamophobic content, which were widely shared on Facebook and X. Such posts may have inflamed the riots that took place last August after three schoolgirls were murdered. MPs are also looking into big tech business models that "encourage the spread of content that can mislead and harm."
Musk, who may soon have a large role in the US government under incoming president Trump, has criticized the UK government and isn't likely to attend. During the riots in August he said: “Civil war is inevitable," and on Monday stated that "Britain is going full Stalin."
In December, UK regulator Ofcom will publish new rules as part of the Online Safety Act. Under the new regulations, it's likely that social media platforms will be forced to prevent the spread of illegal materials such as CSAM and to survey activities that could stir up violence. Companies like X and Facebook will then be required to remove any illegal material.
Two undersea communications cables in the Baltic Sea have been knocked offline, and at least one appears to have been physically cut. CNN received confirmation from a local telecom company that a cable between Lithuania and Sweden was cut on Sunday morning. A second cable, about 60 to 65 miles from the first, routes communications between Finland and Germany. The cause of that outage has yet to be determined, but officials suspect “intentional damage.”
The outages follow a September warning from the US about an increased risk of Russian “sabotage” of undersea cables. That warning came after a joint investigation by public broadcasters in Sweden, Denmark, Norway and Finland found that Russia had deployed a fleet of spy ships in Nordic waters. They were reportedly part of a program designed to sabotage the cables (and wind farms).
This doesn’t leave the European nations entirely without online communications, as data is typically routed through multiple cables to avoid overreliance on a single one.
Cinia, the state-controlled Finnish company that oversees the second cable, said it hasn't determined what caused the outage, since the cable hasn't yet been physically inspected. However, the sudden nature of the outage reportedly suggests it, too, was cut by an outside force.
The foreign ministers of Finland and Germany released a joint statement on Monday. “We are deeply concerned about the severed undersea cable connecting Finland and Germany in the Baltic Sea,” they wrote. “The fact that such an incident immediately raises suspicions of intentional damage speaks volumes about the volatility of our times. A thorough investigation is underway. Our European security is not only under threat from Russia’s war of aggression against Ukraine, but also from hybrid warfare by malicious actors. Safeguarding our shared critical infrastructure is vital to our security and the resilience of our societies.”
The Lithuania-Sweden cable, which handles about a third of Lithuania’s internet capacity, is expected to be repaired “over the next few weeks,” and weather could determine the precise timing.
President-elect Donald Trump has named Brendan Carr as the new chairman of the Federal Communications Commission, The New York Times reported. Carr has previously argued in favor of punishing TV networks for political bias and regulating big tech firms like Google and Apple. The appointment doesn't require the usual Senate approval, since Carr has sat on the commission since 2017.
Under a Trump administration, the FCC will have two Democratic and three Republican commissioners. Carr will take over from current FCC chair Jessica Rosenworcel.
Carr wrote the FCC section of the infamous Project 2025 document, proposing new social media restrictions that could benefit conservative viewpoints. He also wants to limit the Section 230 legal shield that allows social media and other platforms to host and moderate comments and other user-generated content.
"The censorship cartel must be dismantled," Carr wrote last week on X. He added that the FCC under his leadership will also go after TV networks. " Broadcast media have had the privilege of using a scarce and valuable public resource — our airwaves. When the transition is complete, the FCC will enforce this public interest obligation."
However, Carr won't have full powers to enact new rules. Since companies like Google and Meta aren't considered communications services, the FCC would have limited power to regulate them. That means an expansion of its powers would require new legislation. Brendan Carr has “proposed to do a lot of things he has no jurisdiction to do and in other cases he’s blatantly misreading the rules,” Free Press co-chief executive Jessica Gonzalez told the NYT.
That's not to say that Carr can't affect the way the internet operates. In 2017, he voted to repeal net neutrality rules, and in 2021, voted against restoring them.
A damning report from the Anti-Defamation League published Thursday on the “unprecedented” amount of racist and violent content on Steam Community has prompted a US Senator to take action. In a letter spotted by The Verge, Senator Mark Warner (D-VA) asked Valve CEO Gabe Newell how he and his company are addressing the issue.
“My concern is elevated by the fact that Steam is the largest single online gaming digital distribution and social networking platform in the world with over 100 million unique user accounts and a user base similar in scale to that of the ‘traditional social media and social network platforms,’” Warner wrote.
The senator also cited Steam’s online conduct policy that states users may not “upload or post illegal or inappropriate content [including] [real] or disturbing depictions of violence” or “harass other users or Steam personnel.”
“Valve must bring its content moderation practices in line with industry standards or face more intense scrutiny from the federal government for its complicity in allowing hate groups to congregate and engage in activities that undoubtedly puts Americans at risk,” Warner wrote.
Congress doesn’t have the ability to take action against Valve or any other platform beyond shining a light on the problem through letters and committee hearings. In June, the Supreme Court rejected a challenge that sought to prevent government officials from communicating with social media companies about objectionable content.
This also isn’t the first time that Congress has raised concerns with Valve about extremist and racist content created by users or players in one of its products. The Senate Committee on the Judiciary sent a letter to Newell in 2023 to express concerns about players posting and spouting racist language in Valve’s multiplayer online battle arena game Dota 2.
We reached out to Valve for comment. We will update this story if we receive a statement or reactions from Valve.
Reporters Without Borders (RSF) said this week it’s pressing criminal charges against X (Twitter) in France related to a Kremlin disinformation campaign that used the nonprofit as a prop to spread fake news. The organization said legal means are its “last resort” in its fight against the bogus stories, designed to foster pro-Russia and anti-Ukraine sentiment, that festered on the platform. “X’s refusal to remove content that it knows is false and deceitful — as it was duly informed by RSF — makes it complicit in the spread of the disinformation circulating on its platform,” RSF director of advocacy Antoine Bernard said in a statement.
“These legal proceedings seek to remind X, a powerful social media company, and its executives that they can be held criminally responsible if they knowingly provide a platform and tools for disseminating false information, identity theft, misrepresentation, and defamation — offences punishable under the French Penal Code,” RSF attorney Emmanuel Daoud wrote.
RSF published an investigation in September detailing how a fabricated video was planted and spread by Russia on the Elon Musk-owned social platform. The fake clip was made to look like a BBC report, complete with the news organization’s logo, and falsely claimed that an RSF study had found a large number of Ukrainian soldiers sympathizing with Nazism.
False claims that Ukraine is a pro-Nazi nation have been a common propaganda tactic used by Russia since its 2022 invasion. The narrative is designed to engender support for the Kremlin-initiated war, which is estimated to have killed or wounded a million or more people.
RSF’s investigation revealed that an account called “Patricia,” claiming to belong to a translator in France, planted the seed for the disinformation. However, the account’s profile picture turned out to come from a Russian website offering photos of blond women designed “to make avatars.”
RSF says that even the account’s name seemed to have been automatically generated by X. In addition, the organization says Grok, X’s AI chatbot with access to live data about the platform, claimed the account has “very strong opinions, often in support of Russia and Vladimir Putin, while severely criticizing Ukraine and its supporters in Europe.”
The investigation found the video then took off, spreading through a chain that included a pro-Kremlin Irish entrepreneur living in Russia, a Kremlin propagandist with a large following on Telegram and even Russian officials. It was also shared by “highly influential bloggers” known for unflinching support of Vladimir Putin.
“In this story, the Russian authorities have acted a bit like they were laundering dirty information,” an RSF representative said in a video about the investigation (translated from French) in September. “They took false information, they laundered it through official channels. And then, this piece of information that wasn’t actual information was reintroduced into public discourse to make it look credible.”
Russia’s bogus video was widely shared on X and Telegram, and Reporters Without Borders says it had reached half a million combined views by September 13. To capture its frustration with the blow to its credibility, the nonprofit cited the quote (of unknown origin but often attributed to Mark Twain): “A lie can travel halfway around the world while the truth is still putting on its shoes.”
RSF says it filed 10 reports with X of illegal content through the social channel’s reporting system required by the EU’s Digital Services Act (DSA). “After a series of rejections from X and requests for additional information — which RSF provided — none of the reports resulted in the removal of the defamatory content targeting our organisation and its advocacy director,” RSF wrote.
In July, the US Justice Department said it uncovered and dismantled a Russian propaganda network using nearly 1,000 accounts to push pro-Kremlin posts on X. The DOJ claimed the accounts posed as Americans and were made using AI. In October, The Wall Street Journal reported that Elon Musk held multiple private calls with Vladimir Putin from 2022 into this year, describing the contacts as a “closely held secret in government.”
“X provides those who spread falsehoods and manipulate public opinion with a powerful arsenal of tools and unparalleled visibility, while granting the perpetrators total impunity,” Bernard added in the same statement. “It’s time for X to be held accountable. Pressing criminal charges is the last resort against the disinformation and war propaganda that RSF has fallen victim to, which is proliferating on this ‘Muskian’ network.”
Elon Musk’s X is taking the state of California to court over a new law aimed at preventing the spread of AI-generated election misinformation. Bloomberg reports that X filed a lawsuit challenging AB 2655, also known as the Defending Democracy from Deepfake Deception Act of 2024, in a Sacramento federal court.
California Gov. Gavin Newsom signed the bill into law on September 17, creating accountability standards for political speech falsified with AI programs close to an election. The legislation prohibits the distribution of “materially deceptive audio or visual media of a candidate within 60 days of an election at which the candidate will appear on the ballot.”
X argues that the law will create more political speech censorship. The complaint says the First Amendment “includes tolerance for potentially false speech made in the context of such criticisms.”
Newsom signed AB 2655 into law as part of a larger package of bills addressing concerns about the use of AI to create sexually explicit deepfakes and other deceptive material. Shortly afterward, a federal judge issued a preliminary injunction against one of the other laws from that signing.
California has become one of the epicenters of the debate over the use and implementation of AI. Concerns about the use of AI in film and television projects, among other issues, prompted SAG-AFTRA to go on strike in 2023. SAG eventually reached a deal that included AI protections for actors, prohibiting studios from using their likeness without permission or proper compensation. The following year, California passed AB 2602, a law that makes it illegal for studios, publishers and video game developers to use a performer's likeness without their permission.