LONDON: Artificial intelligence (AI) tools such as ChatGPT can be tricked into producing malicious code that could be used to launch cyberattacks, according to research.
A study by researchers from the University of Sheffield’s Department of Computer Science found that it was possible to manipulate chatbots into creating code capable of breaching other systems.
