BEIJING: A Chinese short drama series suspected of using artificial intelligence to obtain people’s facial data has sparked public outcry in China in recent days, prompting regulators and legal experts to stress that technological advancement must not infringe on personal rights.
In late March, several bloggers – including enthusiasts of traditional Chinese attire and models – accused the popular AI-generated series Peach Blossom Hairpin of using technology to replicate their facial features, outfits and make-up without consent to create characters that were widely distributed on short-video platforms.
By then, the series had already garnered more than 40 million views on Hongguo, a micro-drama platform, and some of the alleged victims said they were preparing to take legal action.
On April 3, Hongguo said on its official WeChat account that the series had been removed and that no new content would be uploaded for 15 days because its creator had failed to provide sufficient proof of compliance with regulations governing the use of facial imagery.
The platform said that adherence to legal and regulatory standards is a non-negotiable baseline but noted that short dramas, as a new form of creative product, present major challenges for content review, particularly with the rise of AI tools.
It pledged to strengthen content review processes, upgrade verification technologies and improve authorisation procedures to foster a more regulated environment for content creation and distribution.
Although the producer has not confirmed whether the bloggers’ photos were used as templates for AI generation, legal experts said such actions could still constitute infringement.
Zhao Zhanling, a lawyer at Beijing Javy Law Firm, said that under the civil code and in legal practice, if an AI-generated face leads the public to associate it with a specific individual, it may constitute infringement.
“Copying a person’s image and processing it with AI is a typical example of using information technology to violate someone’s portrait rights,” Zhao said.
As AI technology becomes more prevalent in the film and television industry, similar cases of AI-powered face and voice swopping have become increasingly frequent.
In March, the Beijing Internet Court disclosed a case in which an actress’ images were misused by two companies using AI face-swopping technology in a short drama.
The court ruled in favour of the actress, ordering the defendants to issue a public apology and compensate her for financial losses.
In another case, the court supported a voice-over artist, ruling that using AI to imitate someone’s voice without permission constitutes infringement of voice rights.
“The advancement of AI has facilitated creative production but has also been exploited for infringing activities,” said Ma Xiangxiang, a lawyer at AnJie Broad Law Firm.
She noted that regulators in China have begun addressing the illegal use of such technology, particularly in AI-driven face swopping in short videos.
On April 2, the performers’ committee of the China Federation of Radio and Television Associations issued a statement condemning the unauthorised use of actors’ images and voices through AI face swopping, voice cloning and unauthorised editing or remixing.
The committee said that creators of content linked to specific actors – whether AI-generated lookalikes, imitated voices, face-swopped dramas, commercial uses, virtual replicas or derivative works – cannot escape liability, regardless of how the content is labelled.
On April 5, the studio of Yi Yangqianxi said that AI-generated dramas using the actor’s likeness without permission had been circulating online. He has not appeared in such productions nor authorised any third party to use his image for AI synthesis, the studio said, adding that it had engaged lawyers.
Zhao said pursuing legal remedies is important but noted that it is difficult for ordinary individuals to detect infringement, as AI-generated content often blends features drawn from large datasets, obscuring whose likeness was used.
“Producers frequently claim that any resemblance is purely coincidental, making it harder to prove recognisability,” he said. “Additionally, the costs of legal action – including evidence collection, notarisation and litigation – can be prohibitively high.”
Zhao advised individuals who discover unauthorised AI-generated content using their likeness to immediately record or take a screenshot of the material and preserve evidence, preferably through blockchain methods.
He added that filing complaints with hosting platforms is a faster and more affordable way to seek remedies.
Ma cited the civil code, which requires platforms to take necessary measures – such as removal, blocking or disconnection of links – once they are notified of infringing content.
Upon receiving such notice, platforms must promptly forward it to the alleged infringer and take appropriate action based on preliminary evidence and the nature of the service, she said.
She also called for a stronger legal framework to further regulate AI applications, thereby ensuring data security, strengthening personal information and child protection, and supporting the healthy development of the digital economy. - China Daily/ANN
