
Political adviser cautions against dependence on AI for decision-making, calls for security mechanism to monitor and intercept threats. — SCMP
Excessive reliance on artificial intelligence for decision-making could pose a security risk, exposing users to hackers and other bad actors, a cybersecurity expert has warned amid a nationwide frenzy over China’s home-grown chatbot DeepSeek.
Qi Xiangdong, chairman of Beijing-based cybersecurity firm Qi An Xin (QAX), told the Digital China Summit in the southeastern city of Fuzhou on Tuesday that large AI models brought security challenges and risks, according to domestic media reports.
“As AI becomes more deeply integrated across industries, large models will grow increasingly powerful, and users may become overly dependent on AI-assisted decision-making and judgment,” said Qi, who is also a member of the Chinese People’s Political Consultative Conference, the country’s top political advisory body.
“From an external threat perspective, hackers can exploit vulnerabilities or engage in data ‘poisoning’ to manipulate the model’s decisions, committing malicious acts under the guise of a large model,” he said.
“From an internal operations perspective, if the staff involved introduce erroneous information while updating the knowledge base, it can contaminate the model’s learning environment, leading to incorrect outputs.”
Chinese AI start-up DeepSeek in January launched a chatbot on par with US rivals such as ChatGPT, stunning the tech world and triggering a nationwide AI frenzy among the general public and government agencies.
Authorities have firmly backed the push for widespread AI use. Beijing has hailed DeepSeek as a success for the country’s innovation drive in the face of Western sanctions that have limited China’s access to hi-tech chips.

The public has been quick to adopt AI, with many social media users reporting they had used the technology to help with translation, writing essays and even parenting advice. Some rural residents have found chatbots useful for advice on topics ranging from pig farming to pest control.
The trend has also spread to the medical sector, inspiring some doctors to use artificial intelligence to diagnose patients. However, others have questioned the use of AI in such a specialised field. In February, the central province of Hunan banned hospitals from using the technology to generate prescriptions.
Several cities across China have also integrated AI into their government service platforms and internal operations. Some district governments have used DeepSeek’s models for tasks including drafting and proofreading documents. They have also integrated AI with surveillance cameras to help locate lost people.
Noting the security risks posed by the broad application of AI models, Qi said there should be efforts to build a security governance mechanism to strengthen monitoring to manage core data used in large models. The mechanism could include monitoring, intercepting, and issuing alerts for harmful content and abnormal access behaviour.
On Wednesday, the Cyberspace Administration of China, the country’s top cybersecurity regulator, announced a three-month campaign to regulate AI services and applications.
A notice from the regulator said the campaign would target AI products that gave unauthorised medical advice, misleading investment suggestions, and misinformation affecting minors.
The campaign will also target AI-generated rumours related to current events, public policy, social issues, international affairs, and emergencies, as well as false information in fields such as finance, healthcare, education and law. – South China Morning Post