Bots like ChatGPT ‘can be tricked into making code for cyberattacks’
LONDON: Artificial intelligence (AI) tools such as ChatGPT can be tricked into producing malicious code which could be used to launch cyberattacks, according to research.

A study by researchers from the University of Sheffield’s Department of Computer Science found that it was possible to manipulate chatbots into creating code capable of breaching other systems.
