ChatGPT chats are not confidential, so don't tell it your secrets


As more and more people rely on ChatGPT for help with everyday issues, cybersecurity experts are warning users to think twice before sharing information you wouldn't share with a stranger. — Photo: Philipp Brandstädter/dpa

BERLIN: Intimate health issues are something you might rather tell an AI about than wait at a nearby practice to awkwardly describe them to a doctor. After all, ChatGPT has an answer for virtually any question, right?

And yet: Apart from the fact that ChatGPT is known to provide false answers (some of which can be entirely fabricated), AI chat services should not be considered the best keepers of secrets.

As more and more people rely on ChatGPT and Google's AI, Bard, for help with everyday issues, cybersecurity experts are warning users to think twice before sharing information they wouldn't share with a stranger.

The Germany-based Hasso Plattner Institute (HPI), which specializes in IT, is warning against "disclosing sensitive data," as much of the information you share with an AI is used to help train the systems that power it, and thereby make it smarter.

Any user who shares confidential information with an AI is at risk of giving up their privacy, and a look at ChatGPT's policy shows this.

"As part of our commitment to safe and responsible AI, we review conversations to improve our systems and to ensure the content complies with our policies and safety requirements," ChatGPT developer OpenAI says, confirming that employees can see what you write. The company does have a data deletion feature, however.

It's not only personal secrets that are best kept away from an AI but also company information: according to the HPI, employees should be wary of giving an AI access to any company data.

Anyone who uploads their company's internal employee data to pep up a presentation or gets ChatGPT to help make sense of the company's financial figures could even be passing on trade secrets.

Cybersecurity research firm Cyberhaven has warned that many companies are leaking sensitive data "hundreds of times each week" as a result of employees oversharing with ChatGPT.

A temporary glitch in ChatGPT had allowed some users to "see the titles of other users' conversation history," OpenAI chief executive Sam Altman confirmed earlier in March. "We feel awful about this," Altman said on Twitter, confirming the bug had been fixed. – dpa
