FCC votes to restore net neutrality protections

The Federal Communications Commission has voted to reinstate net neutrality protections that were jettisoned during the Trump administration. As expected, the vote fell across party lines with the three Democratic commissioners in favor and the two Republicans on the panel voting against the measure.

With net neutrality rules in place, broadband service is considered an essential communications resource under Title II of the Communications Act of 1934. That enables the FCC to regulate broadband internet in a similar way to water, power and phone services. That includes giving the agency oversight of outages and the security of broadband networks. Brendan Carr, one of the Republican commissioners, referred to the measure as an "unlawful power grab."  

Under net neutrality rules, internet service providers have to treat all broadband traffic the same way. Users must be given access to all content, websites and apps at the same speeds and under the same conditions. ISPs can't block or prioritize certain content — they're not allowed to throttle access to specific sites or charge streaming services for faster delivery.

The FCC adopted net neutrality protections in 2015, during the Obama administration, but they were scrapped while President Donald Trump was in office. Back in 2021, President Joe Biden signed an executive order urging the FCC to bring back the Obama-era rules, but the commission was unable to do so for quite some time: it was deadlocked at two Democratic and two Republican votes until Anna Gomez was sworn in as the third Democratic commissioner last September. The FCC then moved relatively quickly (at least by its own standards) to re-establish net neutrality protections.

The issue may not be entirely settled. There may still be legal challenges from the telecom industry. However, the FCC's vote in favor of net neutrality is a win for advocates of an open and equitable internet.

This article originally appeared on Engadget at https://www.engadget.com/fcc-votes-to-restore-net-neutrality-protections-161350168.html?src=rss

Google has delayed killing third-party cookies from Chrome (again)

Google keeps promising to phase out third-party cookies in Chrome without actually doing it. The company first vowed to deprecate cookies back in 2020, then pushed the date to 2023 and again to 2024. We did see some movement earlier this year, when Google disabled cookies for one percent of Chrome users, but those efforts have stalled. Now the company says the phase-out won’t happen until next year.

It’s easy to drag Google for this but it’s not entirely in the company’s hands. The tech giant is working closely with the UK’s Competition and Markets Authority (CMA) to ensure that any tools it implements to replace the cookie’s tracking and measurement capabilities aren’t anti-competitive. These tools are known collectively as the Privacy Sandbox and Google says it has to wait until the CMA has had “sufficient time to review” results from industry tests that’ll be provided by the end of June.
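For a sense of what these replacement tools look like in practice, here is a minimal sketch of the Topics API, one of the Privacy Sandbox proposals Chrome ships: instead of reading a cross-site cookie, a page asks the browser for a few coarse interest topics inferred on-device. This is an illustrative snippet, not Google's code; the field names reflect the API as documented for Chrome.

```typescript
// Sketch: querying the Privacy Sandbox Topics API in a supporting browser.
// Instead of a third-party cookie that identifies a user across sites, the
// browser returns a handful of coarse interest topics computed on-device.
async function logAdTopics(): Promise<void> {
  // Feature-detect: the API only exists in browsers with Privacy Sandbox enabled.
  if (!("browsingTopics" in document)) {
    console.log("Topics API not available in this browser");
    return;
  }
  const topics: Array<{ topic: number; taxonomyVersion: string }> =
    await (document as any).browsingTopics();
  // Each entry is an ID into a public topic taxonomy, not a user identifier.
  // This is the tracking/measurement trade-off the CMA is reviewing.
  for (const t of topics) {
    console.log(`topic ${t.topic} (taxonomy ${t.taxonomyVersion})`);
  }
}

logAdTopics();
```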

Google’s Privacy Sandbox has stirred up some controversy in recent years. The proposed tools have drawn complaints from adtech companies, publishers and ad agencies on the grounds that they are difficult to operate, don’t adequately replace traditional cookies and hand too much power to Google. The company acknowledged as much, saying it recognizes “ongoing challenges related to reconciling divergent feedback from the industry, regulators and developers,” which it cited as another reason for the delay until next year.

The CMA isn’t the only regulatory agency giving the side-eye to the current iteration of the Privacy Sandbox tools. The UK’s Information Commissioner’s Office drafted a report indicating that the tools could still be used by advertisers to identify consumers, according to the Wall Street Journal.

Those in the ad industry want to see cookies given the heave-ho, despite complaints about Privacy Sandbox. Drew Stein, CEO of adtech data firm Audigent, told Engadget that it’s time for Google “to deliver on the promise of a better ecosystem” by implementing its plans to eliminate third-party cookies.

The CMA, on the other hand, has indicated a willingness to keep third-party cookies in play, particularly if Google’s solution does more harm than good. Craig Jenkins, the CMA’s director of digital markets, recently said the organization would delay implementation of Privacy Sandbox tools if “we’re not satisfied we can resolve the concerns”, as reported by Adweek. We’ll see what happens in 2025.

This article originally appeared on Engadget at https://www.engadget.com/google-has-delayed-killing-third-party-cookies-from-chrome-again-155911583.html?src=rss

EU’s new right-to-repair rules force companies to repair out-of-warranty devices

The European Union has adopted a right-to-repair directive that will make it easier for consumers to get their devices fixed. The new rules extend a product's guarantee if it breaks under warranty, while obliging manufacturers to repair devices no longer covered. The law still needs to be approved by member nations. 

Devices sold in Europe already come with minimum two-year warranties, but the new rules impose additional requirements. If a device breaks under warranty, the customer must be given a choice between a replacement and a repair; if they choose the latter, the warranty is extended by a year. 

Once the warranty expires, companies are still required to repair "common household products" deemed repairable under EU law, such as smartphones, TVs and certain appliances (the list of covered devices can be extended over time). Consumers may also borrow a device during the repair or, if theirs can't be fixed, opt for a refurbished unit instead.

The EU says repairs must be offered at a "reasonable" price, such that "consumers are not intentionally deterred" from seeking them. Manufacturers must supply spare parts and tools, and can't weasel out of repairs through "contractual clauses, hardware or software techniques." That last provision, though the rules don't spell it out, may make it harder for companies to sunset devices by halting software updates.

In addition, manufacturers can't block independent repairers from using second-hand, original, compatible or 3D-printed spare parts, as long as the parts conform to EU law. They must also provide a website showing repair prices, can't refuse to fix a device previously repaired by someone else and can't refuse a repair for economic reasons.

While applauding the expanded rules, Europe's Right to Repair group said there were missed opportunities. It would have liked to see more product categories included, priority given to repair over replacement, the right for independent repairers to access all spare parts and repair information, and more. "Our coalition will continue to push for ambitious repairability requirements... as well as working with members focused on the implementation of the directive in each member state."

Along with helping consumers save money, right-to-repair rules help reduce e-waste, CO2 pollution and more. The area is currently a battleground in the US as well, with legislation under debate in around half the states. California's right-to-repair law — going into effect on July 1 — forces manufacturers to stock replacement parts, tools and repair manuals for seven years for smartphones and other devices that cost over $100.

This article originally appeared on Engadget at https://www.engadget.com/eus-new-right-to-repair-rules-force-companies-to-repair-out-of-warranty-devices-081939123.html?src=rss

The world’s leading AI companies pledge to protect the safety of children online

Leading artificial intelligence companies including OpenAI, Microsoft, Google, Meta and others have jointly pledged to prevent their AI tools from being used to exploit children and generate child sexual abuse material (CSAM). The initiative was led by child-safety group Thorn and All Tech Is Human, a non-profit focused on responsible tech.

The pledges from AI companies, Thorn said, “set a groundbreaking precedent for the industry and represent a significant leap in efforts to defend children from sexual abuse as a feature with generative AI unfolds.” The goal of the initiative is to prevent the creation of sexually explicit material involving children and take it off social media platforms and search engines. More than 104 million files of suspected child sexual abuse material were reported in the US in 2023 alone, Thorn says. In the absence of collective action, generative AI is poised to make this problem worse and overwhelm law enforcement agencies that are already struggling to identify genuine victims.

On Tuesday, Thorn and All Tech Is Human released a new paper titled “Safety by Design for Generative AI: Preventing Child Sexual Abuse” that outlines strategies and lays out recommendations for companies that build AI tools, search engines, social media platforms, hosting companies and developers to take steps to prevent generative AI from being used to harm children.

One of the recommendations, for instance, asks companies to choose the data sets used to train AI models carefully and to avoid not only those containing instances of CSAM but also those containing adult sexual content altogether, because of generative AI’s propensity to combine the two concepts. Thorn is also asking social media platforms and search engines to remove links to websites and apps that let people “nudify” images of children, thus creating new AI-generated child sexual abuse material online. A flood of AI-generated CSAM, according to the paper, would make identifying genuine victims of child sexual abuse more difficult by worsening the “haystack problem” — a reference to the amount of content that law enforcement agencies must currently sift through.

“This project was intended to make abundantly clear that you don’t need to throw up your hands,” Thorn’s vice president of data science Rebecca Portnoff told the Wall Street Journal. “We want to be able to change the course of this technology to where the existing harms of this technology get cut off at the knees.”

Some companies, Portnoff said, had already agreed to separate images, video and audio that involved children from data sets containing adult content to prevent their models from combining the two. Others also add watermarks to identify AI-generated content, but the method isn’t foolproof — watermarks and metadata can be easily removed.

This article originally appeared on Engadget at https://www.engadget.com/the-worlds-leading-ai-companies-pledge-to-protect-the-safety-of-children-online-213558797.html?src=rss

Proton Mail’s paid users will now get alerts if their info has been posted on the dark web

Proton Mail has introduced Dark Web Monitoring for its paid users, which will keep them informed of breaches or leaks they may have been affected by. If anything's been spotted on the dark web, the feature will send out alerts that include information like what service was compromised, what personal details the attackers got (e.g. passwords, name, etc.) and recommended next steps. At launch, you’ll have to visit the Proton Mail Security Center on the web or desktop to access these alerts, but the company says email and in-app notifications are on the way.

Image: An example of a breach alert from Proton Mail (Proton)

Dark Web Monitoring is intended to be a proactive security measure. If you’ve used your Proton Mail email address to sign up for a third-party service, like a social media site, and hackers later steal user data from that service, the feature lets you know in a timely manner that your credentials have been compromised so you can take action (hopefully) before any harm is done. It’s a fitting move for a service that already offers end-to-end encryption and has made privacy its central pitch from the start. Dark Web Monitoring won’t be available to free users, though.
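Proton hasn’t published a public API for these alerts, but the general mechanics of breach monitoring are easy to sketch. The snippet below is a hypothetical illustration rather than anything Proton-specific: it queries the unrelated, public Have I Been Pwned v3 breach API, and the API key and email address are placeholders.

```typescript
// Hypothetical breach-monitoring sketch using the public Have I Been Pwned
// v3 API. This is NOT Proton's service; the key and address are placeholders.
const HIBP_API_KEY = "YOUR_HIBP_API_KEY"; // placeholder: HIBP requires an API key
const ADDRESS = "user@example.com";       // placeholder address to monitor

async function checkForBreaches(email: string): Promise<void> {
  const url =
    "https://haveibeenpwned.com/api/v3/breachedaccount/" +
    encodeURIComponent(email) +
    "?truncateResponse=false";
  const res = await fetch(url, { headers: { "hibp-api-key": HIBP_API_KEY } });

  if (res.status === 404) {
    // HIBP returns 404 when the address appears in no known breach.
    console.log("No known breaches for this address.");
    return;
  }
  if (!res.ok) throw new Error(`HIBP request failed: ${res.status}`);

  // Each record names the breached service and the exposed data classes
  // (passwords, names, etc.), the same details a Proton alert surfaces.
  const breaches: Array<{ Name: string; DataClasses: string[] }> =
    await res.json();
  for (const b of breaches) {
    console.log(`${b.Name}: exposed ${b.DataClasses.join(", ")}`);
  }
}

checkForBreaches(ADDRESS).catch(console.error);
```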

“While data breaches of third-party sites leading to the leak of personal information (such as your email address) can never be entirely avoided, automated early warning can help users stay vigilant and mitigate worse side effects such as identity theft,” said Eamonn Maguire, Head of Anti-Abuse and Account Security at Proton.

This article originally appeared on Engadget at https://www.engadget.com/proton-mails-paid-users-will-now-get-alerts-if-their-info-has-been-posted-on-the-dark-web-100057504.html?src=rss

EU criticizes Meta’s ‘privacy for cash’ business model

The European Union doesn't think you should have to choose between giving Meta and other major players your data or your money. The European Data Protection Board (EDPB) said in a statement that "consent or pay" models often don't "comply with the requirements for valid consent" when a person must choose between handing over their data for behavioral advertising and paying for privacy.

The EDPB argues that only offering a paid alternative to data collection shouldn't be the default for large online platforms. It doesn't issue a mandate but stresses that these platforms should "give significant consideration" to providing a free option that doesn't involve data processing (or at least not as much). "Controllers should take care at all times to avoid transforming the fundamental right to data protection into a feature that individuals have to pay to enjoy," EDPB Chair Anu Talus said. "Individuals should be made fully aware of the value and the consequences of their choices."

Currently, EU users must pay €10 ($11) a month for an ad-free subscription or agree to share their data. The EU is already investigating whether this system complies with the Digital Markets Act, which went into effect at the beginning of March.

This article originally appeared on Engadget at https://www.engadget.com/eu-criticizes-metas-privacy-for-cash-business-model-103042528.html?src=rss

Creepy monitoring service sells searchable Discord user data for as little as $5

A data scraping service is selling information on what it claims to be 600 million Discord users. A report from 404 Media details Spy Pet, an online service that gathers, stores and sells troves of information from the social platform. But have no fear: It markets its services to totally trustworthy paying clients like law enforcement, AI model trainers or your average person curious about “what their friends are up to.” Why ask them when you can simply purchase and download a copy of their Discord activity?

For as little as $5 in cryptocurrency, Spy Pet lets you access data about specific users, such as which servers they participate in, what messages they’ve sent and when they joined or left voice channels. The service claims to hold information on some 600 million users across 14,000 Discord servers, spanning three billion messages.

As for what inspired Spy Pet, its creator suggested it’s a classic case of doing what one enjoys and pushing personal boundaries. “I like scraping, archiving, and challenging myself,” the creator told 404 Media. “Discord is basically the holy grail of scraping, since Discord is trying absolutely anything to combat scraping.”

Some people run a 5K, set a weight-loss goal or take up pickleball. Others start a social scraping service that sells data to the feds, AI companies and creepy exes. To each their own!

404 Media says the database lets you search for specific users. For each search result, a page shows the servers the user has joined (at least among those Spy Pet monitors), their connected accounts, a table showing their recent messages (including the server name, time stamps and the message itself) and their voice channel entry and exit times. Paying customers can conveniently export their prey’s — or “friend’s” — chats into a CSV file.

Discord says it’s investigating Spy Pet and weighing its options. “Discord is committed to protecting the privacy and data of our users,” a company spokesperson wrote in an email to Engadget. “We are currently investigating this matter. If we determine that violations of our Terms of Service and Community Guidelines have occurred, we will take appropriate steps to enforce our policies. We cannot provide further comments as this is an ongoing investigation.”

This article originally appeared on Engadget at https://www.engadget.com/creepy-monitoring-service-sells-searchable-discord-user-data-for-as-little-as-5-170228224.html?src=rss

Meta is shutting down Threads in Turkey following injunction against data-sharing with Instagram

Meta is shutting down Threads in Turkey on April 29 after an interim injunction from the Turkish Competition Authority (TCA) against automatic data-sharing with Instagram. The TCA ruled that linking Threads and Instagram without user opt-in “will lead to irreparable harms” and that Meta “abused its dominant position” in the industry with the practice. The TCA also suggested that the linking exists primarily to increase the company’s “market power.”

Rather than make any changes to how Instagram and Threads integrate in the region, Meta’s pulling the nascent social media app. The company says this is merely a temporary measure as it works to appeal the injunction, but there’s no timetable for that. In the meantime, Meta suggests that users in Turkey either deactivate their accounts or delete them entirely. Those who deactivate will have their posts and interactions restored “if Threads returns” to the country.

Turkish regulators aren’t the only people who think the automatic linking between Threads and Instagram is, at best, a bit creepy. It’s been a point of contention since the platform launched last year. The apps were so tied together that users couldn’t even delete a Threads account without nuking their Instagram account, though Meta patched this several months back.

Meta also began promoting Threads posts on Facebook and Instagram without user consent, eventually allowing people to opt out of the, uh, “feature.” This is the type of automatic data-sharing that riled the TCA, leading to the recent injunction.

This also isn’t the first regulatory battle between Meta and Turkey. The country fined Meta $18.6 million back in 2022 for data-sharing across its apps, an alleged violation of Turkish competition law, according to a report by TechCrunch. Regulators asked Meta to submit documents detailing its efforts to stop the violations, but found the company’s explanations lacking and slapped it with additional fines, to the tune of $160,000 per day.

This article originally appeared on Engadget at https://www.engadget.com/meta-is-shutting-down-threads-in-turkey-following-injunction-against-data-sharing-with-instagram-154725011.html?src=rss