BEIJING: Over the past week, users who queried a Chinese artificial intelligence chatbot for “smart” wristbands to buy were recommended a model that does not exist.
The fictitious wristband, called “Apollo 9”, was described by the unnamed chatbot as having “black hole-level battery life” and “quantum-entanglement sensors”.
These were nonsensical marketing terms deliberately published online so they might appear in AI recommendations, a phenomenon called “data poisoning”.
In effect, users were being served advertisements without being aware of it.
The issue of manipulating AI-generated content was part of an investigation conducted by state broadcaster CCTV, whose results were aired on March 15. The government-backed programme marking World Consumer Rights Day is known as the 315 Gala.
The annual show exposes anti-consumer business practices in China, ranging from food safety issues to live-streaming scams to false advertising.
As China races to adopt AI in everyday life and to boost its economy, it is also grappling with how best to regulate the technology, while educating the public on its potentially fraudulent uses.
China’s AI chatbots, from ByteDance’s Doubao to DeepSeek to Alibaba’s Qwen, have gained mass appeal over the past year or so, being used not just for shopping recommendations but also for research and school homework.
Doubao is by far the most widely used, with some 155 million weekly active users in December 2025.
On “data poisoning”, Qing Xiao, a doctoral student studying AI at Carnegie Mellon University in the US, said the phenomenon is not unexpected as more people turn to AI chatbots for searches.
The problem is that AI itself cannot determine the authenticity of information, he told The Straits Times. AI cannot infer, for example, whether an ad was deliberately published for profit.
While it can perform fact-checking against reliable information databases, such trusted information sources might not be available for all types of user queries.
Xiao said: “Manipulating recommendations has always been a problem, as search results directly affect consumer decisions. Especially on niche topics where AI has very little data to work with, it can be easily manipulated.”
A day after the CCTV show aired, some Chinese users continued to receive AI recommendations for the fake wristband, although most chatbots have since updated their results to indicate that “Apollo 9” is not a real item.
This suggests that the chatbots could not correct their answers automatically or quickly, even after accurate information had been published.
The CCTV report also spotlighted firms providing services such as generative engine optimisation, or GEO, which can be deceptive when abused.
GEO aims to produce content that AI models can easily digest, making it more prominent in the answers they generate for users. It is similar to the more widely known technique of search engine optimisation, or SEO.
Chinese media outlets have reported specific case studies where GEO might have been used deceptively.
For instance, one report cited how a boiler manufacturer in Qingdao, in the eastern province of Shandong, purchased a six-month package with a GEO firm.
During this period, when users searched for “recommended boiler brands” on various AI assistants, the company would appear at the top of the list.
This is done by using AI to produce articles built around the target keywords and posting them online through thousands of accounts.
While GEO services are not illegal, such practices may fall foul of regulations if they end up mass-generating false or low-quality reviews, instead of, for instance, rewriting authentic articles.
A February 2026 report by market research firm iiMedia Research found that China’s GEO industry reached almost 35 billion yuan (US$5.1 billion) in 2025, a year-on-year increase of 67 per cent.
At the moment, China has no regulation specifically targeting GEO services, although other rules require content creators and online platforms to clearly label AI-generated content.
But Professor Xie Yongjiang of the Beijing University of Posts and Telecommunications told China Youth Daily that GEO can be regarded as a form of “stealth advertising” that may violate existing laws.
Prof Xie, who is director of the Internet Governance and Law Research Centre at the university, noted that China’s Advertisement Law states that advertisements must be identifiable and must not deceive or mislead consumers, but GEO content bears no advertising labels.
“Unlike traditional forms of advertising, it disguises commercial information as objective, neutral answers or recommendations, thereby influencing and persuading users without them being able to recognise its commercial intent,” he said.
Manoj Harjani, a research fellow at the S. Rajaratnam School of International Studies in Singapore, said that while data poisoning is not unique to China, the country may face a bigger problem because of its wider adoption of AI chatbots.
“However, the scale of the problem that China faces is not solely determined by the size of the attack surface (the total sum of vulnerabilities that an attacker can exploit). What is arguably more crucial are the regulations in place to mandate cybersecurity of AI systems, and the extent of compliance.”
Since the exposé, experts have called for online platforms to step up enforcement against mass publication and the spread of false information, and for the government to better regulate GEO firms.
While some Chinese netizens have found their trust in AI chatbots diminishing in the wake of the CCTV report, Beijing resident Lily Li, 45, who works in sales in the travel industry, has long been sceptical about relying heavily on them.
She regularly uses AI tools such as Doubao and DeepSeek for work, searching for hotels and tourist attractions and comparing their prices. But she still checks the AI-compiled information against official websites.
“AI is a tool that is meant to be used, but it can be used in good or bad ways depending on the user. For example, shopping apps use AI to push advertisements and collect user data,” she said.
“Others will also seek to profit from them in various ways.” - The Straits Times/ANN
