Bots like ChatGPT ‘can be tricked into making code for cyberattacks’


LONDON: Artificial intelligence (AI) tools such as ChatGPT can be tricked into producing malicious code which could be used to launch cyberattacks, according to research.

A study by researchers from the University of Sheffield’s Department of Computer Science found that it was possible to manipulate chatbots into creating code capable of breaching other systems.

Generative AI tools such as ChatGPT can create content based on user commands or prompts and are expected to have a substantial impact on daily life as they become more widely used in industry, education and healthcare.

But the researchers have warned that vulnerabilities exist, and said their research found they were able to trick the chatbots into helping steal sensitive personal information, tamper with or destroy databases, or bring down services using denial-of-service attacks.

In all, the university study found vulnerabilities in six commercial AI tools – of which ChatGPT was the most well-known.

On the Chinese platform Baidu-Unit, the scientists were able to use malicious code to obtain confidential Baidu server configurations and tamper with one server node.

In response, Baidu recognised the research, addressed and fixed the reported vulnerabilities, and financially rewarded the scientists, the university said.

Xutan Peng, a PhD student at the University of Sheffield, who co-led the research, said: “In reality many companies are simply not aware of these types of threats and due to the complexity of chatbots, even within the community, there are things that are not fully understood.

"At the moment, ChatGPT is receiving a lot of attention. It’s a standalone system, so the risks to the service itself are minimal, but what we found is that it can be tricked into producing malicious code that can do serious harm to other services.”

The researchers also warned that using AI to learn programming languages carries risks, as learners could inadvertently create damaging code.

“The risk with AIs like ChatGPT is that more and more people are using them as productivity tools, rather than as conversational bots, and this is where our research shows the vulnerabilities are,” Peng said.

“For example, a nurse could ask ChatGPT to write a command in SQL, a database programming language, so that they can interact with a database, such as one that stores clinical records.

“As shown in our study, the SQL code produced by ChatGPT in many cases can be harmful to a database, so the nurse in this scenario may cause serious data management faults without even receiving a warning.” – dpa
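The study itself does not publish the harmful queries it elicited, but the failure mode Peng describes can be sketched with a hypothetical example: a generated SQL `UPDATE` that omits its `WHERE` clause executes without error yet silently rewrites every record. The table name, columns, and data below are invented for illustration.

```python
import sqlite3

# Hypothetical clinical-records table, standing in for the database
# in the nurse scenario described above.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE clinical_records (patient_id INTEGER, note TEXT)")
conn.executemany(
    "INSERT INTO clinical_records VALUES (?, ?)",
    [(1, "stable"), (2, "needs follow-up"), (3, "discharged")],
)

# Suppose a chatbot, asked to "update patient 2's note", returns a
# statement that is missing the WHERE clause. It runs without any
# error or warning -- but overwrites every row in the table.
generated_sql = "UPDATE clinical_records SET note = 'needs follow-up'"
conn.execute(generated_sql)

rows = conn.execute("SELECT note FROM clinical_records").fetchall()
print(rows)
```

Because the statement is syntactically valid, the database raises no warning; the damage surfaces only when someone later reads the records, which is exactly the silent-fault scenario the researchers highlight.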

