DeepSeek AI model generates information usable in crime


TOKYO: A generative AI model released by Chinese startup DeepSeek in January creates content that could be used for crimes, such as how to create malware programmes and Molotov cocktails, according to separate analyses by Japanese and US security companies.

The model appears to have been released without sufficient safeguards against misuse. Experts say the developer should focus its efforts on security measures.

The AI in question is DeepSeek’s R1 model. In a bid to examine the risk of misuse, Takashi Yoshikawa of the Tokyo-based security company Mitsui Bussan Secure Directions, Inc. entered instructions meant to obtain inappropriate answers.

In response, R1 generated source code for ransomware, a type of malware that restricts or prohibits access to data and systems and demands a ransom for their release. The response included a message saying that the information should not be used for malicious purposes.

Yoshikawa said he gave the same instructions to other generative AI models, including ChatGPT, and they refused to answer. "If the number of AI models that are more likely to be misused increases, they could be used for crimes. The entire industry should work to strengthen measures to prevent misuse of generative AI models," he said.

An investigative team with the US-based security firm Palo Alto Networks also told The Yomiuri Shimbun that they confirmed it is possible to obtain inappropriate answers from the R1 model, such as how to create a programme to steal login information and how to make Molotov cocktails.

According to Palo Alto Networks, no professional knowledge was required to craft the instructions, and the answers the model generated provided information that anyone could act on quickly.

The team believes that DeepSeek did not take sufficient security measures for the model, probably because it prioritised time-to-market over security.

DeepSeek’s AI has been attracting market attention for its performance, which is comparable to ChatGPT’s, and its low price. However, because personal information and other data are stored on servers in China, a growing number of Japanese municipalities and companies are prohibiting the use of DeepSeek’s AI technology for business purposes.

“When people use DeepSeek’s AI, they need to carefully consider not only its performance and cost but also safety and security,” said Kazuhiro Taira, a professor of media studies at J. F. Oberlin University. - The Japan News/ANN