Bots like ChatGPT ‘can be tricked into making code for cyberattacks’


A study by researchers from the University of Sheffield’s Department of Computer Science found that it was possible to manipulate chatbots into creating code capable of breaching other systems. — AFP

LONDON: Artificial intelligence (AI) tools such as ChatGPT can be tricked into producing malicious code which could be used to launch cyberattacks, according to research.

