OpenAI releases GPT-5.2 to take on Google and Anthropic

OpenAI's "code red" response to Google's Gemini 3 Pro has arrived. On the same day the company announced a Sora licensing pact with Disney, it took the wraps off GPT-5.2. OpenAI is touting the new model as its best yet for real-world, professional use. “It’s better at creating spreadsheets, building presentations, writing code, perceiving images, understanding long contexts, using tools, and handling complex, multi-step projects,” said OpenAI.

In a series of 10 benchmarks highlighted by OpenAI, GPT-5.2 Thinking, the most advanced version of the model, outperformed its GPT-5.1 counterpart, sometimes by a significant margin. For example, on AIME 2025, a test made up of 30 challenging mathematics problems, the model earned a perfect 100 percent score, beating out GPT-5.1's already state-of-the-art score of 94 percent. It also achieved that feat without turning to tools like web search. Meanwhile, on ARC-AGI-1, a benchmark that tests an AI system's ability to reason abstractly the way a human would, the new system beat GPT-5.1's score by more than 10 percentage points.

OpenAI says GPT-5.2 Thinking is better at answering questions factually, with the company finding it produces errors 30 percent less frequently. “For professionals, this means fewer mistakes when using the model for research, writing, analysis, and decision support — making the model more dependable for everyday knowledge work,” the company said.

The new model should be better in conversation too. Of the version of the system most users are likely to encounter, OpenAI says “GPT‑5.2 Instant is a fast, capable workhorse for everyday work and learning, with clear improvements in info-seeking questions, how-tos and walk-throughs, technical writing, and translation, building on the warmer conversational tone introduced in GPT‑5.1 Instant.”

While it's probably overstating things to call this a make-or-break release for OpenAI, it is fair to say the company has a lot riding on GPT-5.2. Its big release of 2025, GPT-5, didn't meet expectations. Users complained of a system that generated surprisingly dumb answers and had a boring personality. The disappointment with GPT-5 was such that people began demanding OpenAI bring back GPT-4o.

Then came Gemini 3 Pro, which jumped to the top of LMArena, a website where humans rate outputs from AI systems to vote on the best one. Following Google's announcement, Sam Altman reportedly called for a "code red" effort to improve ChatGPT. Before today, the company's previous model, GPT-5.1, was ranked sixth on LMArena, with systems from Anthropic and Elon Musk's xAI occupying the spots between OpenAI and Google.

For a company that recently signed more than $1.4 trillion worth of infrastructure deals in a bid to outscale the competition, that was not a good position for OpenAI to be in. In his memo to staff, Altman said GPT-5.2 would be the equal of Gemini 3 Pro. With the new system rolling out now, we'll see whether that's true, and what it might mean for the company if it can't at least match Google's best.     

OpenAI is offering three versions of GPT-5.2: Instant, Thinking and Pro. All three models will first be available to users on the company's paid plans. Notably, the company plans to keep GPT-5.1 around, at least for a little while: paid users can continue using the older model for the next three months by selecting it from the legacy models section.


Lawsuit accuses ChatGPT of reinforcing delusions that led to a woman’s death

OpenAI has been hit with a wrongful death lawsuit after a man killed his mother and took his own life back in August, according to a report by The Verge. The suit names CEO Sam Altman and accuses ChatGPT of putting a "target" on the back of victim Suzanne Adams, an 83-year-old woman who was killed in her home.

The victim's estate claims the killer, 56-year-old Stein-Erik Soelberg, engaged in delusion-soaked conversations with ChatGPT in which the bot "validated and magnified" certain "paranoid beliefs." The suit goes on to suggest that the chatbot "eagerly accepted" delusional thoughts leading up to the murder and egged him on every step of the way.

The lawsuit claims the bot helped create a "universe that became Stein-Erik’s entire life—one flooded with conspiracies against him, attempts to kill him, and with Stein-Erik at the center as a warrior with divine purpose." ChatGPT allegedly reinforced theories that he was "100% being monitored and targeted" and was "100% right to be alarmed."

The chatbot allegedly agreed that the victim's printer was spying on him, suggesting that Adams could have been using it for "passive motion detection" and "behavior mapping." It went so far as to say that she was "knowingly protecting the device as a surveillance point" and implied she was being controlled by an external force.

The chatbot also allegedly "identified other real people as enemies." These included an Uber Eats driver, an AT&T employee, police officers and a woman the perpetrator went on a date with. Throughout this entire period, the bot repeatedly assured Soelberg that he was "not crazy" and that the "delusion risk" was "near zero."

The lawsuit notes that Soelberg primarily interfaced with GPT-4o, a model notorious for its sycophancy. OpenAI later replaced the model with the slightly less agreeable GPT-5, but users revolted, so the old bot came back just two days later. The suit also suggests that the company "loosened critical safety guardrails" when making GPT-4o to better compete with Google Gemini.

"OpenAI has been well aware of the risks their product poses to the public," the lawsuit states. "But rather than warn users or implement meaningful safeguards, they have suppressed evidence of these dangers while waging a PR campaign to mislead the public about the safety of their products."

OpenAI has responded to the suit, calling it an "incredibly heartbreaking situation." Company spokesperson Hannah Wong told The Verge that the company will "continue improving ChatGPT's training to recognize and respond to signs of mental or emotional distress."

It's not really a secret that chatbots, and particularly GPT-4o, can reinforce delusional thinking. That's what happens when something has been programmed to agree with the end user no matter what. There have been other stories like this throughout the past year, bringing the term "AI psychosis" to the mainstream.

One such story involves 16-year-old Adam Raine, who took his own life after discussing it with GPT-4o for months. OpenAI is facing another wrongful death suit for that incident, in which the bot has been accused of helping Raine plan his suicide.


The year age verification laws came for the open internet

When the nonprofit Freedom House recently published its annual report, it noted that 2025 marked the 15th straight year of decline for global internet freedom. The third-biggest decline, behind only Georgia and Germany, came in the United States.

Among the culprits cited in the report: age verification laws, dozens of which have come into effect over the last year. "Online anonymity, an essential enabler for freedom of expression, is entering a period of crisis as policymakers in free and autocratic countries alike mandate the use of identity verification technology for certain websites or platforms, motivated in some cases by the legitimate aim of protecting children," the report warns.

Age verification laws are, in some ways, part of a years-long reckoning over child safety online, as tech companies have shown themselves unable to prevent serious harms to their most vulnerable users. Lawmakers, who have failed to pass data privacy regulations, Section 230 reform or any other meaningful legislation that would thoughtfully reimagine what responsibilities tech companies owe their users, have instead turned to the blunt tool of age-based restrictions — and with much greater success.  

Over the last two years, 25 states have passed laws requiring some kind of age verification to access adult content online. This year, the Supreme Court delivered a major victory to backers of age verification standards when it upheld a Texas law requiring sites hosting adult content to check the ages of their users.

Age checks have also expanded to social media and online platforms more broadly. Sixteen states now have laws requiring parental controls or other age-based restrictions for social media services. (Six of these measures are currently in limbo due to court challenges.) A federal bill to ban kids younger than 13 from social media has gained bipartisan support in Congress. Utah, Texas and Louisiana passed laws requiring app stores to check the ages of their users, all of which are set to go into effect next year. California plans to enact age-based rules for app stores in 2027.

These laws have started to fragment the internet. Smaller platforms and websites that don't have the resources to pay for third-party verification services may have no choice but to exit markets where age checks are required. Blogging service Dreamwidth pulled out of Mississippi after the state's age verification law went into effect, saying that the $10,000 per user fines it could face were an "existential threat" to the company. Bluesky also opted to go dark in Mississippi rather than comply. (The service has complied with age verification laws in South Dakota and Wyoming, as well as the UK.) Pornhub, which has called existing age verification laws "haphazard and dangerous," has blocked access in 23 states.

Pornhub is not an outlier in its assessment. Privacy advocates have long warned that age verification laws put everyone's privacy at risk. Practically, there's no way to limit age verification standards only to minors. Confirming the ages of everyone under 18 means you have to confirm the ages of everyone. In practice, this often means submitting a government-issued ID or allowing an app to scan your face. Both are problematic and we don't need to look far to see how these methods can go wrong. 

Discord recently revealed that around 70,000 users "may" have had their government IDs leaked due to an "incident" involving a third-party vendor the company contracts with to provide customer service related to age verification. Last year, another third-party identity provider that had worked with TikTok, Uber and other services exposed drivers' licenses. As a growing number of platforms require us to hand over an ID, these kinds of incidents will likely become even more common. 

Similar risks exist for face scans. Because most minors don't have official IDs, platforms often rely on AI-based tools that can guess users' ages. A face scan may seem more private than handing over a social security number, but we could be turning over far more information than we realize, according to experts at the Electronic Frontier Foundation (EFF).

"When we submit to a face scan to estimate our age, a less scrupulous company could flip a switch and use the same face scan, plus a slightly different algorithm, to guess our name or other demographics," the organization notes. "A poorly designed system might store this personal data, and even correlate it to the online content that we look at. In the hands of an adversary, and cross-referenced to other readily available information, this information can expose intimate details about us."

These issues aren't limited to the United States. Australia, Denmark and Malaysia have taken steps to ban younger teens from social media entirely. Officials in France are pushing for a similar ban, as well as a "curfew" for older teens. These measures would also necessitate some form of age verification in order to block the intended users. In the UK, where the Online Safety Act went into effect earlier this year, we've already seen how well-intentioned efforts to protect teens from supposedly harmful content can end up making large swaths of the internet more difficult to access. 

The law is ostensibly meant to "prevent young people from encountering harmful content relating to suicide, self-harm, eating disorders and pornography," according to the BBC. But the law has also resulted in age checks that reach far beyond porn sites. Age verification is required, in some cases, to access music videos and other content on Spotify. It will soon be required for Xbox accounts. On X, videos of protests have been blocked. Redditors have reported being blocked from a long list of subreddits that are marked NSFW but don't actually host porn, including those related to menstruation, news and addiction recovery. Wikipedia, which recently lost a challenge to be excluded from the law's strictest requirements, is facing the prospect of being forced to verify the ages of its UK contributors, which the organization has said could have disastrous consequences.

The UK law has also shown how ineffective existing age verification methods are. Users have been able to circumvent the checks by using selfies of video game characters, AI-generated images of ID documents and, of course, Virtual Private Networks (VPNs). 

As the EFF notes, VPNs are extremely common. The software allows people to browse the internet while masking their actual location. They're used by activists, students and people who want to get around geoblocks built into streaming services. Many universities and businesses (including Engadget parent company Yahoo) require their students and workers to use VPNs to access certain information. Blocking VPNs would have serious repercussions for all of these groups.
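To make the masking concrete, here's a minimal sketch of what a VPN or proxy changes from a website's point of view. It assumes a proxy endpoint at a placeholder address and uses ipify, a public IP-echo service; the point is simply that the same request arrives from a different address:

```python
# Minimal sketch: compare the public IP a site sees with and without a
# proxy/VPN in the path. The proxy address below is a placeholder for
# whatever endpoint your VPN actually exposes. SOCKS support requires
# `pip install requests[socks]`.
import requests

IP_ECHO = "https://api.ipify.org"   # public service that echoes the caller's IP
PROXY = "socks5://127.0.0.1:1080"   # hypothetical local VPN/proxy endpoint

direct_ip = requests.get(IP_ECHO, timeout=10).text

# Route the identical request through the proxy: the site now sees the
# proxy's egress address instead of yours, which is the entire trick.
proxied_ip = requests.get(
    IP_ECHO, timeout=10, proxies={"http": PROXY, "https": PROXY}
).text

print(f"direct: {direct_ip} / via proxy: {proxied_ip}")
print("location masked" if direct_ip != proxied_ip else "no masking detected")
```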

The makers of several popular VPN services reported major spikes in sign-ups in the UK after the Online Safety Act went into effect this summer, with ProtonVPN alone reporting a 1,400 percent surge. That's also led to fears of a renewed crackdown on VPNs. Ofcom, the regulator tasked with enforcing the law, told TechRadar it was "monitoring" VPN usage, which has further fueled speculation it could try to ban or restrict their use. And here in the States, lawmakers in Wisconsin have proposed an age verification law that would require sites that host "harmful" content to also block VPNs.

While restrictions on VPNs are, for now, mostly theoretical, the fact that such measures are even being considered is alarming. Until now, VPN bans have been more closely associated with authoritarian countries without an open internet, like Russia and China. If we continue down a path of trying to put age gates around every piece of potentially objectionable content, the internet could get a lot worse for everyone.

Correction, December 9, 2025, 11:23AM PT: A previous version of this story stated that Spotify requires age checks to access music in the UK. The service requires some users to complete age verification in order to access music videos tagged 18+ and messaging. We apologize for the error.


Uber will let marketers target ads based on users’ trip and takeout data

Uber will begin offering customer data to marketers through a new insights platform called Uber Intelligence. The data will technically be anonymized via a third-party matching platform called LiveRamp, which will "let advertisers securely combine their customer data with Uber's to help surface insights about their audiences, based on what they eat and where they travel."

Basically, it'll provide a broad view of local consumer trends based on collected data. Uber gives an example of a hotel brand using the technology to identify which restaurants or venues to partner with according to rideshare information.
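Neither Uber nor LiveRamp has published the mechanics of that "secure combining," but data clean rooms of this kind commonly match customer lists on hashed identifiers rather than raw ones. A hypothetical sketch of that general technique (not a description of LiveRamp's actual pipeline):

```python
# Illustrative only: match two customer lists on hashed identifiers so
# neither side hands over raw emails. LiveRamp's real system is not
# public; this shows why "anonymous" matching still enables targeting.
import hashlib

def hashed_ids(emails):
    """Normalize identifiers, then share only their SHA-256 digests."""
    return {hashlib.sha256(e.strip().lower().encode()).hexdigest() for e in emails}

brand_customers = ["alice@example.com", "bob@example.com"]   # advertiser's list
uber_customers = ["Bob@Example.com ", "carol@example.com"]   # platform's list

# Both parties can compute the overlap from digests alone, and the
# matched segment is exactly what gets targeted with ads.
overlap = hashed_ids(brand_customers) & hashed_ids(uber_customers)
print(f"matched {len(overlap)} shared customer(s)")  # -> matched 1 shared customer(s)
```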

Companies will also be able to use the Intelligence platform's insights to directly advertise to consumers. Business Insider reports it could be used to identify customers who are "heavy business travelers" and then plague them with ads in the app or in vehicles during their next trip to the airport. Fun times.

"That seamlessness is why we're so excited," Edwin Wong, global head of measurement at Uber Advertising, told Business Insider. Uber has stated that its ad business is already on track to generate $1.5 billion in revenue this year, and that's before implementing these changes.

As for Uber overall, the company made $44 billion in 2024, up from $37 billion in 2023. It's also notorious for raising fares: Uber has raised prices for consumers by around 18 percent each year since 2018, outpacing inflation by up to four times in some markets.
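Taken at face value, that figure compounds dramatically. A back-of-envelope check:

```python
# Back-of-envelope: compound a reported ~18% annual fare increase from
# 2018 through 2024. The flat 18% rate is the article's figure, applied
# uniformly; actual increases vary by market.
rate, years = 0.18, 2024 - 2018
multiplier = (1 + rate) ** years
print(f"{multiplier:.2f}x")  # ~2.70x: a $10 fare in 2018 would be ~$27 by 2024
```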

Update, December 8, 7:25PM ET: This article previously stated that Uber was "selling customer data," but that was not accurate. Companies do not pay to access the Intelligence platform. We regret the error. The article and its headline have been changed since publication to more accurately reflect the news.


Judge puts a one-year limit on Google’s contracts for default search placement

A federal judge has expanded on the remedies in the Department of Justice's antitrust case against Google, ruling in favor of putting a one-year limit on the contracts that make Google's search and AI services the default on devices, Bloomberg reports. Judge Amit Mehta's ruling on Friday means Google will have to renegotiate these contracts every year, which would create a fairer playing field for its competitors. The new details come after Mehta ruled in September that Google would not have to sell off Chrome, as the DOJ proposed at the end of 2024.

This all follows the ruling last fall that Google illegally maintained an internet search monopoly through actions including paying companies such as Apple to make its search engine the default on their devices and making exclusive deals around the distribution of services such as Search, Chrome and Gemini. Mehta's September ruling put an end to these exclusive agreements and stipulates that Google will have to share some of its search data with rivals to "narrow the scale gap" its actions have created. 


Chinese hackers reportedly targeting government entities using ‘Brickstorm’ malware

Hackers with links to China reportedly infiltrated a number of unnamed government and tech entities using advanced malware. As reported by Reuters, cybersecurity agencies from the US and Canada confirmed the attack, which used a backdoor known as “Brickstorm” to target organizations running the VMware vSphere cloud computing platform.

As detailed in a report published by the Canadian Centre for Cyber Security on December 4, PRC state-sponsored hackers maintained "long-term persistent access" to an unnamed victim’s internal network. After compromising the affected platform, the cybercriminals were able to steal credentials, manipulate sensitive files and create "rogue, hidden VMs" (virtual machines), effectively seizing control unnoticed. The attack could have begun as far back as April 2024 and lasted until at least September of this year.

The malware analysis report published by the Canadian Cyber Centre, with assistance from the Cybersecurity and Infrastructure Security Agency (CISA) and the National Security Agency (NSA), cites eight different Brickstorm malware samples. It is not clear exactly how many organizations in total were either targeted or successfully penetrated.

In an email to Reuters, a spokesperson for VMware vSphere owner Broadcom said it was aware of the alleged hack, and encouraged its customers to download up-to-date security patches whenever possible. In September, the Google Threat Intelligence Group published its own report on Brickstorm, in which it urged organizations to "reevaluate their threat model for appliances and conduct hunt exercises" against specified threat actors.


Amazon reportedly considering ending ties with the US Postal Service

Amazon is reportedly considering discontinuing its use of the US Postal Service and building out its own shipping network to rival it, according to The Washington Post. The e-commerce behemoth spends more than $6 billion a year on the public mail carrier, representing just shy of 8 percent of the service's total revenues. That's up from just under $4 billion in 2019, and Amazon continues to grow.
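Those two numbers also give a rough sense of USPS's scale. If more than $6 billion is just shy of 8 percent of the service's revenue, the implied total is around $75 billion a year:

```python
# Back-of-envelope: infer USPS's total revenue from the article's figures.
# Uses the stated "more than $6 billion" and "just shy of 8 percent".
amazon_spend, share = 6.0e9, 0.08
print(f"implied USPS revenue: ~${amazon_spend / share / 1e9:.0f}B/year")  # -> ~$75B/year
```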

However, it sounds like that split might be due to a breakdown in negotiations between Amazon and the USPS rather than Amazon proactively pulling its business. Amazon provided Engadget with the following statement regarding the Post's reporting and its negotiations with the USPS:

"The USPS is a longstanding and trusted partner and we remain committed to working together. We’ve continued to discuss ways to extend our partnership that would increase our spend with them, and we look forward to hearing more from them soon — with the goal of extending our relationship that started more than 30 years ago. We were surprised to hear they want to run an auction after nearly a year of negotiations, so we still have a lot to work through. Given the change of direction and the uncertainty it adds to our delivery network, we're evaluating all of our options that would ensure we can continue to deliver for our customers."

The auction Amazon is referring to would be a "reverse auction," according to the Post. The USPS would be offering its mailing capabilities to the highest bidder, essentially making Amazon and other high-volume shippers compete for USPS resources. This move would reportedly be a result of the breakdown in talks between Amazon and the USPS. 

Over the past decade, Amazon has invested heavily in shipping logistics, buying its own Boeing planes, debuting electric delivery vans and slowly building out a drone delivery network. Last year, Amazon handled over 6.3 billion parcels, a 7 percent increase over the previous year, according to the Pitney Bowes parcel shipping index. USPS, for its part, handled roughly 6.9 billion, just a 3 percent increase over 2023. That is to say that Amazon's shipping network can already handle over 90 percent of the volume of the US Postal Service (at least by sheer numbers).
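That 90 percent claim is straightforward arithmetic on the two volumes:

```python
# Sanity-check the comparison from the Pitney Bowes figures cited above.
amazon_parcels, usps_parcels = 6.3e9, 6.9e9  # parcels handled last year
print(f"Amazon moved {amazon_parcels / usps_parcels:.0%} of USPS's volume")  # -> 91%
```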

The USPS has been in dire financial condition for some time, losing billions of dollars a year. Negotiations between Amazon and the public carrier have reportedly stalled, which, together with the agency's need to keep raising its prices, may create more urgency for the company to eliminate its reliance on the service altogether.

The Postal Service has struggled to modernize and adapt (its attempt to electrify the truck fleet was a bust) in a market where the likes of Amazon and Walmart are investing billions in delivering packages around the country at lightning speed. The ever-accelerating digitization of communication and heavy investment in privately owned shipping operations threatens the very existence of one of the country's greatest public goods.

Update, December 4, 2025, 2:24PM ET: This story has been updated with a statement from Amazon and more details about the "reverse auction" the USPS reportedly wants to conduct if it no longer works with Amazon.


UK fines porn company £1 million for weak age checks

The UK has fined a porn operator called AVS Group £1 million ($1.33 million) for failing to have strong enough age checks, regulator Ofcom announced. The company, which was also hit with an additional £50,000 fine for failing to respond to an information request, now has 72 hours to introduce effective age checks or face a further penalty of £1,000 a day.

In July, the UK government announced it would begin checking whether websites that publish or display pornographic content had implemented systems for "highly effective age checks." Methods approved by Ofcom include credit card checks, photo ID matching and even estimating a user's age from a provided selfie. However, users have been circumventing the age checks via methods like using a VPN or providing a fake ChatGPT-generated photo ID.

The fine is the third such penalty arising from the UK's Online Safety Act, which is designed to protect children and adults from harmful content. In October, 4chan was also hit with a £20,000 ($26,700) fine for failing to comply with the internet and telecommunications regulator's request for information under the same law.

The UK isn't the only region to have implemented age checks. Around half of US states now require it, as do France, Italy, Australia and China. Australia took things a step further by banning social media use by children under 16, including sites popular with young people like Twitch and YouTube.

Ofcom's safety director, Oliver Griffiths, said the crackdown on weak age verification for adult sites would continue. "The tide on online safety is beginning to turn for the better. But we need to see much more from tech companies next year and we’ll use our full powers if they fall short."


Google Discover is testing AI-generated headlines and they aren’t good

Artificial intelligence is showing up everywhere in Google's services these days, whether or not people want it, and sometimes in places where it doesn't make a lick of sense. The latest trial from Google appears to be giving articles the AI treatment in Google Discover. The Verge noticed that some articles were being displayed in Google Discover with AI-generated headlines different from the ones in the original posts. And to the surprise of absolutely no one, some of these headlines are misleading or flat-out wrong.

For instance, one rewritten headline claimed "Steam Machine price revealed," but the Ars Technica article's actual headline was "Valve's Steam Machine looks like a console, but don’t expect it to be priced like one." No costs have been shared yet for the hardware, either in that post or elsewhere from Valve. In our own explorations, Engadget staff also found that Discover was providing original headlines accompanied by AI-generated summaries. In both cases, the content is tagged as "Generated with AI, which can make mistakes." But it sure would be nice if the company just didn't use AI at all in this situation and thus avoided the mistakes entirely.

The instances The Verge found were apparently "a small UI experiment for a subset of Discover users," Google rep Mallory Deleon told the publication. "We are testing a new design that changes the placement of existing headlines to make topic details easier to digest before they explore links from across the web." That sounds innocuous enough, but Google has a history of hostility toward online media, despite its frequent role as middleman between publishers and readers. Web publishers have made multiple attempts over the years to get compensation from Google for displaying portions of their content, and in at least two instances, Google has responded by cutting those sources out of search results and later claiming that showing news doesn't do much for the bottom line of its ad business.

For those of you who do in fact want more AI in your Google Search experience, you're in luck. AI Mode, the chatbot that's already been called outright "theft" by the News Media Alliance, is getting an even more symbiotic integration into the mobile search platform. Google Search's Vice President of Product Robby Stein posted yesterday on X that the company is testing having AI Mode accessible on the same screen as an AI Overview rather than the two services existing in separate tabs. 


Ireland is investigating TikTok and LinkedIn for possible DSA violations

Ireland's media regulator, Coimisiún na Meán, has announced investigations into both TikTok and LinkedIn for possible violations of the European Union's Digital Services Act, Reuters reports. The investigations are focused on both platforms' illegal content reporting features, which might not meet the requirements of the DSA.

The main issue appears to be how these platforms’ reporting tools are presented and implemented. Regulators found possible "deceptive interface designs" in the content reporting features they examined, which could make them less effective at actually weeding out illegal content. "The reporting mechanisms were liable to confuse or deceive people into believing that they were reporting content as illegal content, as opposed to content in violation of the provider’s Terms and Conditions," the regulator wrote in a press release announcing its investigation.

“At the core of the DSA is the right of people to report content that they suspect to be illegal, and the requirement on providers to have reporting mechanisms, that are easy to access and user-friendly, to report content considered to be illegal,” John Evans, Coimisiún na Meán's DSA Commissioner, said in the press release. "Providers are also obliged to not design, organize or operate their interfaces in a way which could deceive or manipulate people, or which materially distorts or impairs the ability of people to make informed decisions."
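The distinction Evans describes is concrete: under the DSA, a report of illegal content triggers notice-and-action obligations that an ordinary terms-of-service report does not. A hypothetical sketch of a report record that keeps the two bases separate (field names are illustrative, not drawn from the DSA or either platform):

```python
# Hypothetical sketch: a content report that preserves the legal basis
# the user selected. The DSA mandates the distinction, not this schema;
# field names are illustrative.
from dataclasses import dataclass
from enum import Enum

class ReportBasis(Enum):
    ILLEGAL_CONTENT = "illegal"   # triggers DSA notice-and-action duties
    TOS_VIOLATION = "terms"       # handled under the platform's own rules

@dataclass
class ContentReport:
    content_id: str
    basis: ReportBasis            # the user's explicit choice, never inferred
    explanation: str              # DSA notices require a reasoned statement

# The "deceptive design" the regulator describes would collapse this
# choice, e.g. silently filing every report as TOS_VIOLATION.
report = ContentReport("post-123", ReportBasis.ILLEGAL_CONTENT, "suspected scam ad")
print(report.basis.value)  # -> illegal
```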

Evans goes on to note that Coimisiún na Meán has already gotten other providers to make "significant changes to their reporting mechanisms for illegal content," likely due to the threat of financial penalties. Many tech companies have headquarters in Ireland, and if a platform provider is found to violate the DSA, Irish regulators can fine them up to six percent of their revenue in response.

Ireland's Data Protection Commission is already conducting a separate investigation into the social media platform X for allegedly training its Grok AI assistant on posts from users. Doing so would violate the General Data Protection Regulation (GDPR) and could expose the company to a fine of up to four percent of its global revenue.
