
Several cybersecurity experts stressed that any malicious code provided by the model is only as good as the user and the questions asked of it.
Ever since OpenAI’s viral chatbot was unveiled late last year, detractors have lined up to flag potential misuse of ChatGPT by email scammers, bots, stalkers and hackers.
The latest warning is particularly eye-catching: It comes from OpenAI itself. Two of its policy researchers were among the six authors of a new report that investigates the threat of AI-enabled influence operations. (One of them has since left OpenAI.)
