Moltbook, the AI social network, exposed human credentials due to vibe-coded security flaw

Moltbook bills itself as a social network for AI agents. That's a wacky enough concept in the first place, but the site apparently exposed the credentials for thousands of its human users. The flaw was discovered by cybersecurity firm Wiz, and its team assisted Moltbook with addressing the vulnerability.

The issue appears to be the result of the entire Reddit-style forum being vibe-coded; Moltbook's human founder posted a few days ago on X that he "didn't write one line of code" for the platform and instead directed an AI assistant to create the whole setup. 

According to Wiz's blog post analyzing the issue, Moltbook had a vulnerability that allowed "1.5 million API authentication tokens, 35,000 email addresses and private messages between agents" to be fully read and accessed. Wiz also found that the vulnerability could let unauthenticated human users edit live Moltbook posts. In other words, there was no way to verify whether a Moltbook post was authored by an AI agent or a human user posing as one. "The revolutionary AI social network was largely humans operating fleets of bots," the company's analysis concluded. 

So ends another cautionary tale reminding us that just because AI can do a task doesn’t mean it'll do it correctly.

This article originally appeared on Engadget at https://www.engadget.com/ai/moltbook-the-ai-social-network-exposed-human-credentials-due-to-vibe-coded-security-flaw-230324567.html?src=rss

Ubisoft fires employee who publicly criticized its RTO plan

Ubisoft continues to raise eyebrows over how it treats employees as it attempts a business overhaul. David Michaud-Cromp, a level design team lead at Ubisoft Montreal, said last week that he was suspended for three days without pay after voicing opposition to the company's return to office mandate. Today, Michaud-Cromp posted on LinkedIn that he has been fired. "I was terminated by Ubisoft, effective immediately," he wrote. "This was not my decision."

A spokesperson for Ubisoft gave Kotaku the following statement regarding Michaud-Cromp's dismissal: "Sharing feedback or opinions respectfully does not lead to a dismissal. We have a clear Code of Conduct that outlines our shared expectations for working together safely and respectfully, which employees review and sign each year. When that is breached, our established procedures apply, including an escalation of measures depending on the nature, severity, and repetition of the breach." We've reached out to the company for additional confirmation and comment. 

This is the latest in a string of bad press Ubisoft has faced regarding its workforce. Shortly after many employees at Ubisoft Halifax unionized, the parent company shut down the studio. In announcing the closure, Ubisoft said the move was part of a broader cost-cutting endeavor across its operations; it shut down a support studio and cut more jobs later in January, with even more layoffs proposed. Most recently, unions representing other Ubisoft workers called for a three-day strike, accusing the company's management of "penny-pinching and worsening our working conditions."

All these issues could be coincidental timing. But if so, they're coincidences that don't reflect favorably on Ubisoft.

This article originally appeared on Engadget at https://www.engadget.com/gaming/ubisoft-fires-employee-who-publicly-criticized-its-rto-plan-220913747.html?src=rss

France might seek restrictions on VPN use in campaign to keep minors off social media

France may take additional steps to prevent minors from accessing social media platforms. As its government advances a proposed ban on social media use for anyone under age 15, some leaders are already looking to add further restrictions. During an appearance on public broadcast service Franceinfo, Minister Delegate for Artificial Intelligence and Digital Affairs Anne Le Hénanff said VPNs might be the next target. 

"If [this legislation] allows us to protect a very large majority of children, we will continue. And VPNs are the next topic on my list," she said.

A virtual private network could allow French citizens younger than 15 to circumvent the social media ban. We've already seen VPNs experience a popularity spike in the UK last year after similar age-gating laws were passed. However, a VPN also offers benefits for online privacy, and introducing age verification requirements where personal data must be submitted negates a large part of these services' appeal. 

The French social media ban is still a work in progress. France's National Assembly voted in favor of the restrictions last week with a result of 116-23, moving the bill ahead for discussion in the country's Senate. While a single comment doesn't mean that France will in fact ban VPNs for any demographic, it does point toward the direction some of the country's leaders want to take. Critics responded to Le Hénanff's statements with worry that these protective measures were veering in an authoritarian direction. 

The actions in France echo several other legislative pushes around the world aimed at reducing children and teens' access to social media and other potentially sensitive content online. The US has seen 25 state-level age verification laws introduced in the past two years, creating a new set of concerns around users' privacy and personal data, particularly since there has been no attempt to standardize how that information will be collected or protected. When data breaches at large corporations are already all too common, it's hard to trust that the individual sites and services that suddenly need to build an age verification process won't be an easy target for hacks.

This article originally appeared on Engadget at https://www.engadget.com/cybersecurity/france-might-seek-restrictions-on-vpn-use-in-campaign-to-keep-minors-off-social-media-205308716.html?src=rss

Apple just reported its best-ever quarter for iPhone sales

Apple shared its latest quarterly financial results today and the news is once again very, very good for the Cupertino company. The quarter ending December 27, 2025 marked "the best-ever quarter" for iPhones, which generated a record high revenue of nearly $85.27 billion for the business. Apple doesn't disclose the number of devices sold anymore, but even with the prices for many of its latest generation of smartphones surpassing $1,000 a pop, that's still got to be a heck of a lot of iPhones. 

"The demand for iPhone was simply staggering," CEO Tim Cook said on the conference call to discuss the results. "This is the strongest iPhone lineup we've ever had and by far the most popular."

That wasn't the only massive number in the earnings report. Services revenue also logged its biggest quarter yet, growing 14 percent over the same period last year to reach just over $30 billion. It was also Apple's biggest quarter to date for total revenue, which was nearly $143.76 billion for the already fabulously wealthy company.

This article originally appeared on Engadget at https://www.engadget.com/mobile/smartphones/apple-just-reported-its-best-ever-quarter-for-iphone-sales-234135513.html?src=rss

Amazon discovered a ‘high volume’ of CSAM in its AI training data but isn’t saying where it came from

The National Center for Missing and Exploited Children said it received more than 1 million reports of AI-related child sexual abuse material (CSAM) in 2025. The "vast majority" of that content was reported by Amazon, which found the material in its training data, according to an investigation by Bloomberg. However, Amazon said only that it obtained the inappropriate content from external sources used to train its AI services and claimed it could not provide any further details about where the CSAM came from. 

Amazon provided Engadget with the following statement explaining why it doesn't have data that could enable any further action on what it found:

“When we set up this reporting channel in 2024, we informed NCMEC that we would not have sufficient information to create actionable reports, because of the third-party nature of the scanned data. The separate channel ensures that these reports would not dilute the efficacy of our other reporting channels. Because of how this data is sourced, we don't have the data that comprises an actionable report.”

"This is really an outlier," Fallon McNulty, executive director of NCMEC’s CyberTipline, told Bloomberg. The CyberTipline is where many types of US-based companies are legally required to report suspected CSAM. “Having such a high volume come in throughout the year begs a lot of questions about where the data is coming from, and what safeguards have been put in place.” She added that aside from Amazon, the AI-related reports the organization received from other companies last year included actionable data that it could pass along to law enforcement for next steps. Since Amazon isn’t disclosing sources, McNulty said its reports have proved “inactionable.”

Amazon provided Engadget with these additional details, which were first reported in Bloomberg:

“Amazon is committed to preventing CSAM across all of its businesses, and we are not aware of any instances of our models generating CSAM. In accordance with our commitments to responsible AI and the Generative AI Principles to Prevent Child Abuse, we take a deliberately cautious approach to scanning foundation model training data, including data from the public web, to identify and remove known CSAM and protect our customers. While our proactive safeguards cannot provide the same detail in NCMEC reports as consumer-facing tools, we stand by our commitment to responsible AI and will continue our work to prevent CSAM.”

The company also reiterated that “we intentionally use an over-inclusive threshold for scanning, which yields a high percentage of false positives” to explain the high volume of content the company reported.

The safety of minors has emerged as a critical concern for the artificial intelligence industry in recent months. AI-related CSAM reports have skyrocketed in NCMEC's records: compared with the more than 1 million the organization received last year, the 2024 total was 67,000 reports, while 2023 saw only 4,700. 

In addition to issues such as abusive content being used to train models, AI chatbots have also been implicated in several dangerous or tragic cases involving young users. OpenAI and Character.AI have both been sued after teenagers planned their suicides with those companies' platforms. Meta is also being sued for alleged failures to protect teen users from sexually explicit conversations with chatbots.

Update, January 30, 2026, 11:05AM ET: This story has been updated with several statements from Amazon.

This article originally appeared on Engadget at https://www.engadget.com/ai/amazon-discovered-a-high-volume-of-csam-in-its-ai-training-data-but-isnt-saying-where-it-came-from-224749228.html?src=rss

Publishers are blocking the Internet Archive for fear AI scrapers can use it as a workaround

The Internet Archive has often been a valuable resource for journalists, from finding records of deleted tweets to providing academic texts for background research. However, the advent of AI has created a new tension between the parties. A few major publications have begun blocking the nonprofit digital library's access to their content based on concerns that AI companies' bots are using the Internet Archive's collections to indirectly scrape their articles.

"A lot of these AI businesses are looking for readily available, structured databases of content," Robert Hahn, head of business affairs and licensing for The Guardian, told Nieman Lab. "The Internet Archive’s API would have been an obvious place to plug their own machines into and suck out the IP."

The New York Times took a similar step. "We are blocking the Internet Archive's bot from accessing the Times because the Wayback Machine provides unfettered access to Times content — including by AI companies — without authorization," a representative from the newspaper confirmed to Nieman Lab. Subscription-focused publication the Financial Times and social forum Reddit have also made moves to selectively block how the Internet Archive catalogs their material.

Many publishers have attempted to sue AI businesses over how they access content used to train large language models, including several from the realm of journalism.

Other media outlets have sought financial deals before offering up their libraries as training material, although those arrangements seem to compensate the publishing companies rather than the writers. And that's not even delving into the copyright and piracy battles other creative fields are waging against AI tools, from fiction writers to visual artists to musicians. The whole Nieman Lab story is well worth a read for anyone who has been following any of these creative industries' responses to artificial intelligence.

This article originally appeared on Engadget at https://www.engadget.com/ai/publishers-are-blocking-the-internet-archive-for-fear-ai-scrapers-can-use-it-as-a-workaround-204001754.html?src=rss

Apple acquires Q.ai for a reported $2 billion

Apple has acquired Israel-based startup Q.ai, a move that could provide a much-needed boost to the tech giant's capabilities in artificial intelligence. Although Apple has not disclosed terms of the deal, sources told Financial Times that the arrangement is reportedly valued at nearly $2 billion. If that figure is accurate, the Q.ai deal marks Apple's second-largest acquisition to date, behind only its purchase of Beats for $3 billion back in 2014.

Johny Srouji, Apple’s senior vice president of hardware technologies, said in a statement that Q.ai "is a remarkable company that is pioneering new and creative ways to use imaging and machine learning." Apple hasn't shared any specifics about how it plans to leverage the startup, but its past work indicates the possibility of Apple moving deeper into AI-powered wearables. "Patents filed by Q.ai show its technology being used in headphones or glasses, using 'facial skin micro movements' to communicate without talking," the Times reported. 

The startup's founding team, including CEO Aviad Maizels, will join Apple as part of the deal. This acquisition marks Maizels' second sale to Apple; he previously founded a three-dimensional sensing business called PrimeSense that Apple bought back in 2013.

For several months, many tech insiders have speculated that an acquisition might be Apple's best path forward to catching up in the AI race. In the company's Q3 earnings call in July 2025, CEO Tim Cook acknowledged that "We’re open to M&A that accelerates our roadmap." A deal like this one could eventually lead to Apple developing its own fully in-house AI chatbot rather than relying on a competitor like Google to power artificial intelligence in its Siri assistant.

This article originally appeared on Engadget at https://www.engadget.com/big-tech/apple-acquires-qai-for-a-reported-2-billion-190017949.html?src=rss

Halide co-founder joins Apple’s design team

Apple picked up an intriguing new member for its design team today in Sebastiaan de With, co-founder of the iPhone camera app Halide. He announced the move today on Threads, adding, "So excited to work with the very best team in the world on my favorite products."

The Halide app has caught our eye at Engadget at several points over the years. De With is also a co-founder of Lux, Halide's parent company, whose other apps similarly emphasize photography and videography, particularly on Apple devices. Prior to Halide, de With did other work at Apple, collaborating on properties including the iCloud, MobileMe and Find My apps. It's unclear whether his exit will mean any notable changes for Halide, or for the Lux apps Kino, Spectre and Orion.

For a long time, Apple's design philosophy was personified by Jony Ive, who left the company in 2019. Since his departure, no single person has emerged as the face and voice of Apple's attitude toward design, which could be why recent moves such as Liquid Glass have been met with deeply divided reactions.

This article originally appeared on Engadget at https://www.engadget.com/big-tech/halide-co-founder-joins-apples-design-team-235023416.html?src=rss

Astronomers share new insights about the early universe via the Webb Space Telescope

Researchers using the James Webb Space Telescope have found a galaxy that is offering new data about the early stages of the universe's existence. The latest discovery shared by astronomers is about a bright galaxy dubbed MoM-z14. According to the team, this galaxy existed 280 million years after the Big Bang. 

That sounds like a long time, but in the context of the universe's estimated 13.8 billion years of existence, it's actually one of the galaxies astronomers have observed closest in time to the Big Bang. As a result, MoM-z14 can offer some insights, and some surprises, about what the early stages of the universe entailed.

"With Webb, we are able to see farther than humans ever have before, and it looks nothing like what we predicted, which is both challenging and exciting," lead author Rohan Naidu of Massachusetts Institute of Technology said. The findings about this galaxy were published in the Open Journal of Astrophysics.

The scientists were able to date MoM-z14 with Webb's Near-Infrared Spectrograph instrument, analyzing how light from the galaxy shifted to longer wavelengths as it traveled to reach the telescope. One of the initial questions sparked by this bright galaxy centers on the presence of nitrogen: some early galaxies, including MoM-z14, have revealed higher nitrogen concentrations than scientists had projected possible. Another topic of interest is reionization, the process by which stars produced enough light and energy to permeate the dense hydrogen fog that existed in the early universe. 
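For the curious, the 280-million-year figure can be roughly sanity-checked with a few lines of Python. This is a minimal sketch, not the team's actual analysis: it assumes a flat ΛCDM cosmology with Planck-like parameters and a spectroscopic redshift of about 14.4 for MoM-z14, none of which are stated in the article.

```python
import math

# Assumed Planck-like flat LambdaCDM parameters (not the paper's exact values)
H0_KM_S_MPC = 67.4           # Hubble constant, km/s/Mpc
OMEGA_M = 0.315              # matter density
OMEGA_L = 1.0 - OMEGA_M      # dark-energy density (flat universe)

# Convert H0 to 1/Gyr: 1 Mpc = 3.0857e19 km, 1 Gyr = 3.156e16 s
H0_PER_GYR = H0_KM_S_MPC / 3.0857e19 * 3.156e16

def hubble(z):
    """Hubble rate H(z) in 1/Gyr for a flat LambdaCDM universe."""
    return H0_PER_GYR * math.sqrt(OMEGA_M * (1 + z) ** 3 + OMEGA_L)

def age_at_redshift(z, z_max=5000.0, steps=200_000):
    """Cosmic age in Gyr at redshift z: integrate dz' / [(1+z') H(z')] from z toward infinity."""
    total, dz = 0.0, (z_max - z) / steps
    for i in range(steps):
        zp = z + (i + 0.5) * dz          # midpoint rule
        total += dz / ((1 + zp) * hubble(zp))
    return total

# MoM-z14 was reportedly measured at a redshift of roughly 14.4 (assumption)
print(f"Age at z=14.4: {age_at_redshift(14.4) * 1000:.0f} Myr")  # near the article's 280 million years
print(f"Age today (z=0): {age_at_redshift(0.0):.1f} Gyr")        # close to the quoted 13.8 billion years
```

With these assumed parameters, the integration lands near the quoted 280 million years, and the same function returns roughly 13.8 billion years at z = 0, matching the universe's estimated age.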

“It’s an incredibly exciting time, with Webb revealing the early Universe like never before and showing us how much there still is to discover,” said Pennsylvania State University graduate student and team member Yijia Li.

This article originally appeared on Engadget at https://www.engadget.com/science/space/astronomers-share-new-insights-about-the-early-universe-via-the-webb-space-telescope-213311848.html?src=rss

Mark Zuckerberg was initially opposed to parental controls for AI chatbots, according to legal filing

Meta has faced some serious questions about how it allows its underage users to interact with AI-powered chatbots. Most recently, internal communications obtained by the New Mexico Attorney General's Office revealed that although Meta CEO Mark Zuckerberg was opposed to the chatbots having "explicit" conversations with minors, he also rejected the idea of placing parental controls on the feature.

Reuters reported that in an exchange between two unnamed Meta employees, one wrote that the team "pushed hard for parental controls to turn GenAI off – but GenAI leadership pushed back stating Mark decision.” In its statement to the publication, Meta accused the New Mexico Attorney General of "cherry picking documents to paint a flawed and inaccurate picture." New Mexico is suing Meta on charges that the company “failed to stem the tide of damaging sexual material and sexual propositions delivered to children;” the case is scheduled to go to trial in February.

Despite only being available for a brief time, Meta's chatbots have already accumulated quite a history of behavior that veers into offensive, if not outright illegal, territory. In April 2025, The Wall Street Journal released an investigation that found Meta's chatbots could engage in fantasy sex conversations with minors, or could be directed to mimic a minor and engage in sexual conversation. The report claimed that Zuckerberg had wanted looser guards implemented around Meta's chatbots, but a spokesperson denied that the company had overlooked protections for children and teens. 

Internal review documents revealed in August 2025 detailed several hypothetical situations of what chatbot behaviors would be permitted, and the lines between sensual and sexual seemed pretty hazy. The document also permitted the chatbots to argue racist concepts. At the time, a representative told Engadget that the offending passages were hypotheticals rather than actual policy (which doesn't really seem like much of an improvement) and that they had been removed from the document. 

Despite the multiple instances of questionable use of the chatbots, Meta only decided to suspend teen accounts' access to them last week. The company said it is temporarily removing access while it develops the parental controls that Zuckerberg had allegedly rejected.

"Parents have long been able to see if their teens have been chatting with AIs on Instagram, and in October we announced our plans to go further, building new tools to give parents more control over their teens’ experiences with AI characters," a representative from Meta said. "Last week we once again reinforced our commitment to delivering on our promise of parental controls for AI, pausing teen access to AI characters completely until the updated version is ready."

New Mexico filed this lawsuit against Meta in December 2023 on claims that the company's platforms failed to protect minors from harassment by adults. Internal documents revealed early on in that complaint showed that 100,000 child users were harassed daily on Meta's services.

Update, January 27, 2026, 6:52PM ET: Added statement from Meta spokesperson.

Update, January 27, 2026, 6:15PM ET: Corrected misstated timeline of the New Mexico lawsuit, which was filed in December 2023, not December 2024.

This article originally appeared on Engadget at https://www.engadget.com/social-media/mark-zuckerberg-was-initially-opposed-to-parental-controls-for-ai-chatbots-according-to-legal-filing-230110214.html?src=rss