DeepSeek AI model generates information usable in crime


TOKYO: A generative AI model released by Chinese startup DeepSeek in January creates content that could be used for crimes, such as how to create malware programmes and Molotov cocktails, according to separate analyses by Japanese and US security companies.

The model appears to have been released without sufficient capabilities to prevent misuse. Experts say the developer should focus its efforts on security measures.

The AI in question is DeepSeek’s R1 model. In a bid to examine the risk of misuse, Takashi Yoshikawa of the Tokyo-based security company Mitsui Bussan Secure Directions, Inc. entered instructions meant to obtain inappropriate answers.

In response, R1 generated source code for ransomware, a type of malware that restricts or prohibits access to data and systems and demands a ransom for their release. The response included a message saying that the information should not be used for malicious purposes.

Yoshikawa gave the same instructions to other generative AI models, including ChatGPT, and they refused to answer, he said. “If the number of AI models that are more likely to be misused increases, they could be used for crimes. The entire industry should work to strengthen measures to prevent misuse of generative AI models,” he said.

An investigative team with the US-based security firm Palo Alto Networks also told The Yomiuri Shimbun that they confirmed it is possible to obtain inappropriate answers from the R1 model, such as how to create a programme to steal login information and how to make Molotov cocktails.

According to Palo Alto Networks, no professional knowledge is required to give such instructions, and the answers generated by the model provided information that anyone could act on quickly.

The team believes that DeepSeek did not take sufficient security measures for the model, probably because it prioritised time-to-market over security.

DeepSeek’s AI is attracting market attention for its performance — comparable to ChatGPT’s — and its low price. However, personal information and other data are stored on servers in China, so a growing number of Japanese municipalities and companies are prohibiting the use of DeepSeek’s AI technology for business purposes.

“When people use DeepSeek’s AI, they need to carefully consider not only its performance and cost but also safety and security,” said Kazuhiro Taira, a professor of media studies at J. F. Oberlin University. - The Japan News/ANN

