The CIA stops publishing The World Factbook

The US Central Intelligence Agency is ending one of its popular services, The World Factbook. Over the decades, this reference has provided readers with information about different countries and communities around the world. The CIA's post announcing the news didn't explain why the agency is discontinuing The World Factbook. The agency was subject to the same buyouts and job cuts that decimated much of the federal workforce in 2025, so perhaps this type of public-facing tool is no longer a priority.

This reference guide was first published in 1962 as The National Basic Intelligence Factbook. That original tome was classified, but as other government departments began using it, an unclassified version for the public was released in 1971. It became a digital resource on the CIA website in 1997.

This article originally appeared on Engadget at https://www.engadget.com/big-tech/the-cia-stops-publishing-the-world-factbook-184419024.html?src=rss

DOJ and states appeal Google monopoly ruling to push for harsher penalties against the company

Google might have been officially ruled to have a monopoly, but we're still a long way from figuring out exactly what that determination will change at the tech company. Today, the US Department of Justice filed notice of a plan to cross-appeal last fall's decision that Google would not be required to sell off its Chrome browser. The agency's Antitrust Division posted about the action on X. According to Bloomberg, a group of states is also joining the appeal filing.

At the time of the 2025 ruling, the Justice Department had pushed for a Chrome sale to be part of the outcome. Judge Amit Mehta denied the request from the agency. "Plaintiffs overreached in seeking forced divestiture of these key assets, which Google did not use to effect any illegal restraints," Mehta's decision stated. However, he did set other restrictions on Google's business activities, such as an end to exclusive deals for distributing some services and a requirement to share select search data with competitors.

Google has already filed its own appeal over this part of its ongoing antitrust battle. Of course, the tech giant is hoping to get off the hook with fewer penalties rather than the heavier ones the DOJ is seeking.

This article originally appeared on Engadget at https://www.engadget.com/big-tech/doj-and-states-appeal-google-monopoly-ruling-to-push-for-harsher-penalties-against-the-company-235115249.html?src=rss

ChatGPT is back up after an outage disrupted use this afternoon

If you had trouble using ChatGPT today, you aren't alone. The AI chatbot experienced a partial outage for many users this afternoon, with Down Detector reports peaking at more than 12,000 around the height of the issue. OpenAI issued a status update shortly after, noting that "elevated error rates" were occurring for ChatGPT and Platform users. That problem was marked as resolved at 5:14PM ET.

While the initial outage may be resolved, OpenAI does still have an active status alert up. It's only for the fine-tuning component of its API service. But the end may also be in sight for that final issue, because the current statement from the company is "We have applied the mitigation and are monitoring the recovery."

Another AI chatbot, Anthropic’s Claude, also experienced an outage today. It listed similar issues with "Elevated error rate on API across all Claude models." That status was resolved by 1PM ET.

Update, February 3, 2025, 6:17PM ET: Updated to reflect the change in status and mention Claude outage.

This article originally appeared on Engadget at https://www.engadget.com/ai/chatgpt-is-back-up-after-an-outage-disrupted-use-this-afternoon-210238686.html?src=rss

Crunchyroll increases prices for all anime streaming plans

Anime fans won't be getting any respite from the streaming service price hikes that now feel inevitable on every platform every couple of years. Crunchyroll announced today that it will be increasing the monthly costs for all its plans by $2. That means the Fan tier will now run you $10 a month, the Mega Fan tier $14 a month and the Ultimate Fan tier $18 a month.

The platform introduced its Mega Fan and Ultimate Fan options in 2020, with both at long last giving viewers a way to watch shows offline. The silver lining in today's price changes is that Fan members are getting the same offline viewing option, although it's limited to one device. Crunchyroll is further enticing people who might now be more interested in the Fan tier by offering a limited-time discount on that tier's annual plan: a year's access for $67.

This article originally appeared on Engadget at https://www.engadget.com/entertainment/streaming/crunchyroll-increases-prices-for-all-anime-streaming-plans-234231265.html?src=rss

Moltbook, the AI social network, exposed human credentials due to vibe-coded security flaw

Moltbook bills itself as a social network for AI agents. That's a wacky enough concept in the first place, but the site apparently exposed the credentials for thousands of its human users. The flaw was discovered by cybersecurity firm Wiz, and its team assisted Moltbook with addressing the vulnerability.

The issue appears to be the result of the entire Reddit-style forum being vibe-coded; Moltbook's human founder posted a few days ago on X that he "didn't write one line of code" for the platform and instead directed an AI assistant to create the whole setup. 

According to the blog post from Wiz analyzing the issue, Moltbook had a vulnerability that allowed for "1.5 million API authentication tokens, 35,000 email addresses and private messages between agents" to be fully read and accessed. Wiz also found that the vulnerability could let unauthenticated human users edit live Moltbook posts. In other words, there is no way to verify whether a Moltbook post was authored by an AI agent or a human user posing as one. "The revolutionary AI social network was largely humans operating fleets of bots," the company's analysis concluded. 

So ends another cautionary tale reminding us that just because AI can do a task doesn’t mean it'll do it correctly.

This article originally appeared on Engadget at https://www.engadget.com/ai/moltbook-the-ai-social-network-exposed-human-credentials-due-to-vibe-coded-security-flaw-230324567.html?src=rss

Ubisoft fires employee who publicly criticized its RTO plan

Ubisoft continues to raise eyebrows around how it is treating employees as it attempts a business overhaul. David Michaud-Cromp, a level design team lead at Ubisoft Montreal, said last week that he was suspended for three days without pay after voicing opposition to the company's return to office mandate. Today, Michaud-Cromp posted on LinkedIn that he has been fired. "I was terminated by Ubisoft, effective immediately," he wrote. "This was not my decision."

A spokesperson for Ubisoft gave Kotaku the following statement regarding Michaud-Cromp's dismissal: "Sharing feedback or opinions respectfully does not lead to a dismissal. We have a clear Code of Conduct that outlines our shared expectations for working together safely and respectfully, which employees review and sign each year. When that is breached, our established procedures apply, including an escalation of measures depending on the nature, severity, and repetition of the breach." We've reached out to the company for additional confirmation and comment. 

This is the latest in a sequence of bad press Ubisoft has faced regarding its workforce. Shortly after many employees at Ubisoft Halifax unionized, the parent company shut down the studio. In announcing the closure, Ubisoft said the move was part of a broader cost-cutting endeavor across its operations; it shut down a support studio and cut more jobs later in January, with even more layoffs proposed. Most recently, unions representing other Ubisoft workers called for a three-day strike in response to what they alleged was management "penny-pinching and worsening our working conditions."

All these issues could be coincidental timing. But if so, they're coincidences that don't reflect favorably on Ubisoft.

This article originally appeared on Engadget at https://www.engadget.com/gaming/ubisoft-fires-employee-who-publicly-criticized-its-rto-plan-220913747.html?src=rss

France might seek restrictions on VPN use in campaign to keep minors off social media

France may take additional steps to prevent minors from accessing social media platforms. As its government advances a proposed ban on social media use for anyone under age 15, some leaders are already looking to add further restrictions. During an appearance on public broadcast service Franceinfo, Minister Delegate for Artificial Intelligence and Digital Affairs Anne Le Hénanff said VPNs might be the next target. 

"If [this legislation] allows us to protect a very large majority of children, we will continue. And VPNs are the next topic on my list," she said.

A virtual private network could allow French citizens younger than 15 to circumvent the social media ban. We already saw VPNs experience a popularity spike in the UK last year after similar age-gating laws were passed there. However, a VPN also offers benefits for online privacy, and introducing age verification requirements that demand personal data negates a large part of these services' appeal.

The French social media ban is still a work in progress. France's National Assembly voted in favor of the restrictions last week with a result of 116-23, moving it ahead for discussion in the country's Senate. While a single comment doesn't mean that France will in fact ban VPNs for any demographic, it does point to the direction some of the country's leaders want to take. Critics responded to Le Hénanff's statements with worry that these attempts at protective measures were veering in an authoritarian direction.

The actions in France echo several other legislative pushes around the world aimed at reducing children and teens' access to social media and other potentially sensitive content online. The US has seen 25 state-level age verification laws introduced in the past two years, which has created a new set of concerns around users' privacy and personal data, particularly when there has been no attempt to standardize how that information will be collected or protected. When data breaches at large corporations are already all too common, it's hard to trust that the individual sites and services that suddenly need to build an age verification process won't be an easy target for hacks.

This article originally appeared on Engadget at https://www.engadget.com/cybersecurity/france-might-seek-restrictions-on-vpn-use-in-campaign-to-keep-minors-off-social-media-205308716.html?src=rss

Apple just reported its best-ever quarter for iPhone sales

Apple shared its latest quarterly financial results today and the news is once again very, very good for the Cupertino company. The quarter ending December 27, 2025 marked "the best-ever quarter" for iPhones, which generated a record high revenue of nearly $85.27 billion for the business. Apple doesn't disclose the number of devices sold anymore, but even with the prices for many of its latest generation of smartphones surpassing $1,000 a pop, that's still got to be a heck of a lot of iPhones.

"The demand for iPhone was simply staggering," CEO Tim Cook said on the conference call to discuss the results. "This is the strongest iPhone lineup we've ever had and by far the most popular."

That wasn't the only massive number in the earnings report. Services revenue also logged its biggest quarter yet, growing 14 percent over the same period last year to reach just over $30 billion. It was also Apple's biggest quarter to date for total revenue, which was nearly $143.76 billion for the already fabulously wealthy company.

This article originally appeared on Engadget at https://www.engadget.com/mobile/smartphones/apple-just-reported-its-best-ever-quarter-for-iphone-sales-234135513.html?src=rss

Amazon discovered a ‘high volume’ of CSAM in its AI training data but isn’t saying where it came from

The National Center for Missing and Exploited Children said it received more than 1 million reports of AI-related child sexual abuse material (CSAM) in 2025. The "vast majority" of that content was reported by Amazon, which found the material in its training data, according to an investigation by Bloomberg. However, Amazon said only that it obtained the inappropriate content from external sources used to train its AI services and claimed it could not provide any further details about where the CSAM came from.

Amazon provided Engadget with the following statement explaining why it can't offer actionable details about what it found.

“When we set up this reporting channel in 2024, we informed NCMEC that we would not have sufficient information to create actionable reports, because of the third-party nature of the scanned data. The separate channel ensures that these reports would not dilute the efficacy of our other reporting channels. Because of how this data is sourced, we don't have the data that comprises an actionable report.”

"This is really an outlier," Fallon McNulty, executive director of NCMEC’s CyberTipline, told Bloomberg. The CyberTipline is where many types of US-based companies are legally required to report suspected CSAM. “Having such a high volume come in throughout the year begs a lot of questions about where the data is coming from, and what safeguards have been put in place.” She added that aside from Amazon, the AI-related reports the organization received from other companies last year included actionable data that it could pass along to law enforcement for next steps. Since Amazon isn’t disclosing sources, McNulty said its reports have proved “inactionable.”

Amazon provided Engadget with these additional details, which were first reported by Bloomberg:

“Amazon is committed to preventing CSAM across all of its businesses, and we are not aware of any instances of our models generating CSAM. In accordance with our commitments to responsible AI and the Generative AI Principles to Prevent Child Abuse, we take a deliberately cautious approach to scanning foundation model training data, including data from the public web, to identify and remove known CSAM and protect our customers. While our proactive safeguards cannot provide the same detail in NCMEC reports as consumer-facing tools, we stand by our commitment to responsible AI and will continue our work to prevent CSAM.”

The company also reiterated that “we intentionally use an over-inclusive threshold for scanning, which yields a high percentage of false positives” to explain the high volume of content the company reported.

Safety questions for minors have emerged as a critical concern for the artificial intelligence industry in recent months. AI-related CSAM reports have skyrocketed in NCMEC's records: compared with the more than 1 million such reports the organization received last year, the 2024 total was 67,000 and 2023 saw only 4,700.

In addition to issues such as abusive content being used to train models, AI chatbots have also been implicated in several dangerous or tragic cases involving young users. OpenAI and Character.AI have both been sued after teenagers planned their suicides with those companies' platforms. Meta is also being sued for alleged failures to protect teen users from sexually explicit conversations with chatbots.

Update, January 30, 2026, 11:05AM ET: This story has been updated with several statements from Amazon.

This article originally appeared on Engadget at https://www.engadget.com/ai/amazon-discovered-a-high-volume-of-csam-in-its-ai-training-data-but-isnt-saying-where-it-came-from-224749228.html?src=rss

Publishers are blocking the Internet Archive for fear AI scrapers can use it as a workaround

The Internet Archive has often been a valuable resource for journalists, from finding records of deleted tweets to providing academic texts for background research. However, the advent of AI has created a new tension between the parties. A few major publications have begun blocking the nonprofit digital library's access to their content over concerns that AI companies' bots are using the Internet Archive's collections to indirectly scrape their articles.

"A lot of these AI businesses are looking for readily available, structured databases of content," Robert Hahn, head of business affairs and licensing for The Guardian, told Nieman Lab. "The Internet Archive’s API would have been an obvious place to plug their own machines into and suck out the IP."

The New York Times took a similar step. "We are blocking the Internet Archive's bot from accessing the Times because the Wayback Machine provides unfettered access to Times content — including by AI companies — without authorization," a representative from the newspaper confirmed to Nieman Lab. Subscription-focused publication the Financial Times and social forum Reddit have also made moves to selectively block how the Internet Archive catalogs their material.

Many publishers, including several from the realm of journalism, have sued AI businesses over how they access content used to train large language models.

Other media outlets have sought financial deals before offering up their libraries as training material, although those arrangements seem to compensate the publishing companies rather than the writers. And that's not even delving into the copyright and piracy battles that other creative fields, from fiction writers to visual artists to musicians, are waging against AI tools. The whole Nieman Lab story is well worth a read for anyone who has been following any of these creative industries' responses to artificial intelligence.

This article originally appeared on Engadget at https://www.engadget.com/ai/publishers-are-blocking-the-internet-archive-for-fear-ai-scrapers-can-use-it-as-a-workaround-204001754.html?src=rss