LONDON: Artificial intelligence (AI) tools such as ChatGPT can be tricked into producing malicious code that could be used to launch cyberattacks, according to research.
A study by researchers from the University of Sheffield’s Department of Computer Science found that it was possible to manipulate chatbots into creating code capable of breaching other systems.
