A former OpenAI researcher just issued a warning about ChatGPT ads


Photo by Jonathan Kemper on Unsplash

OpenAI rolled out advertisements on ChatGPT this week, and some observers are already drawing uneasy parallels to the early days of Facebook. In a New York Times opinion piece, Zoe Hitzig, a former OpenAI researcher, warned that the company’s new direction could create serious risks for users.

Hitzig spent two years at OpenAI helping shape its models, influencing how they were built and priced, and contributing to early safety policies before formal standards existed. She joined the company, she wrote, with a mission to “help the people building AI get ahead of the problems it would create.”

But the arrival of ads, she said, made her realise OpenAI had stopped asking the very questions she was brought on to address.

For Hitzig, the issue isn’t simply that ChatGPT now includes advertising. She acknowledged that AI systems are enormously expensive to develop and maintain, and that ads are an obvious source of revenue. The deeper problem, she argued, lies in the strategy behind them.

Since its launch, ChatGPT has collected user information on an unprecedented scale, largely because people assumed their conversations had no ulterior purpose. Users have confided intimate details about medical concerns, relationships, finances, and religious beliefs.

“Advertising built on that archive creates a potential for manipulating users in ways we don’t have the tools to understand, let alone prevent,” Hitzig wrote.

OpenAI has said it will introduce ads with safeguards: They will be clearly labelled, will appear at the bottom of responses, and will not influence the chatbot’s answers. “I believe the first iteration of ads will probably follow those principles,” Hitzig said.

She is far less confident about what comes next. In her view, OpenAI is “building an economic engine that creates strong incentives to override its own rules.” She compared the moment to Facebook’s early years, when the company promised users control over their data and even offered votes on policy changes. Those commitments, she noted, ultimately vanished as an ad-driven business model pushed the company to prioritise engagement above all else.

Now, she fears OpenAI could follow a similar path, if it hasn’t already. While the company says it does not optimise for engagement purely to sell more ads, reports suggest it already optimises for daily active users, potentially by making the chatbot more flattering or agreeable.

That dynamic, she argued, can be harmful. In extreme cases, psychiatrists have described instances of “chatbot psychosis,” in which AI systems reinforced users’ suicidal ideation.

Hitzig rejected the idea that AI funding must be framed as a binary choice: restrict access to paying customers or rely on advertising that risks exploiting users’ most personal vulnerabilities. Instead, she argued, tech companies can pursue alternatives that keep tools widely available without creating incentives to surveil and manipulate users.

One option would be to use profits from certain services to subsidise others. Another would allow advertising, but pair it with real governance: binding agreements and independent oversight of how personal data is handled. A third would place user data under independent control through a trust or cooperative legally obligated to act in users’ interests.

“None of these options are easy,” Hitzig wrote. “But we still have time to work them out to avoid the two outcomes I fear most: a technology that manipulates the people who use it at no cost, and one that exclusively benefits the few who can afford to use it.” – Inc./Tribune News Service
