LONDON: Artificial intelligence (AI) tools such as ChatGPT can be tricked into producing malicious code which could be used to launch cyberattacks, according to research.
A study by researchers from the University of Sheffield’s Department of Computer Science found that it was possible to manipulate chatbots into creating code capable of breaching other systems.
