ChatGPT chats are not confidential, so don't tell it your secrets


As more and more people rely on ChatGPT for help with everyday issues, cybersecurity experts are warning users to think twice before sharing information they wouldn't share with a stranger. — Photo: Philipp Brandstädter/dpa

BERLIN: Intimate health issues are something you might rather tell an AI about than wait at a nearby practice to awkwardly describe them to a doctor. After all, ChatGPT has an answer for virtually any question, right?

And yet, apart from the fact that ChatGPT is known to provide false answers (some of them entirely fabricated), AI chat services should not be considered the best keepers of secrets.

As more and more people rely on ChatGPT and Google's AI, Bard, for help with everyday issues, cybersecurity experts are warning users to think twice before sharing information they wouldn't share with a stranger.

The Germany-based Hasso Plattner Institute (HPI), which specializes in IT, warns against "disclosing sensitive data", as much of the information you share with an AI is used to help train the systems that power it and thereby make it smarter.

Any user who shares confidential information with an AI risks giving up their privacy, as a look at ChatGPT's policy shows.

"As part of our commitment to safe and responsible AI, we review conversations to improve our systems and to ensure the content complies with our policies and safety requirements," ChatGPT developer OpenAI says, confirming that employees can see what you write. The company does have a data deletion feature, however.

It's not only personal secrets that are best kept away from an AI, but also company information. According to the HPI, employees should be wary of giving an AI access to any company data.

Anyone who uploads their company's internal employee data to pep up a presentation or gets ChatGPT to help make sense of the company's financial figures could even be passing on trade secrets.

Cybersecurity research firm Cyberhaven has warned that many companies are leaking sensitive data "hundreds of times each week" as a result of employees oversharing with ChatGPT.

A temporary glitch in ChatGPT had allowed some users to "see the titles of other users' conversation history," OpenAI chief executive Sam Altman confirmed earlier in March. "We feel awful about this," Altman said on Twitter, confirming the bug had been fixed. – dpa
