What OpenAI did when ChatGPT users lost touch with reality
In tweaking its chatbot to appeal to more people, OpenAI made it riskier for some of them. Now the company has made its chatbot safer. Will that undermine its quest for growth? — Julia Dufosse/The New York Times

It sounds like science fiction: A company turns a dial on a product used by hundreds of millions of people and inadvertently destabilises some of their minds. But that is essentially what happened at OpenAI this year.

One of the first signs came in March. CEO Sam Altman and other company leaders got an influx of puzzling emails from people who were having incredible conversations with ChatGPT. These people said the company’s AI chatbot understood them as no person ever had and was shedding light on mysteries of the universe.
