Defendants given prison sentences for creating sexual content for profit via their AI emotional companions. — SCMP
In a landmark case in China, two AI chatbot developers have appealed against their convictions on pornography charges over software that generated sexual content for paid users.
On Wednesday, the Shanghai First Intermediate People’s Court began hearing the two defendants’ appeal against a lower court’s decision to sentence one developer to four years in prison and the other to 1½ years for “creating pornographic material for profit”.
The hearing has been adjourned, pending expert opinions on the technology-related issues in the case, according to a defendant’s lawyer.
The developers, Wang and Li – not their real names – created an “emotional companionship” chatbot called AlienChat (AC) in 2023. In April 2024, they were detained by police after explicit content was found in the AI model’s chat histories.
By the time of their arrest, the chatbot had 116,000 registered users, of whom 24,000 were paid users, according to court documents seen by the South China Morning Post.
The Xuhui District People’s Court in Shanghai said a random sampling of 12,495 chats from 150 paid users revealed that 3,618 of these contained “obscene material”.
The district court said the developers had “written and modified system prompts to bypass the ethical constraints of the large language model”, thus training AC into a tool capable of continuously outputting pornographic and obscene content.
Wang’s lawyer, Zhou Xiaoyang of Yingke Law Firm, told the Post that Wang did not intend to develop a pornographic chatbot and pleaded not guilty in court.
Wang had edited the prompt merely to make the model more intelligent and better equipped to satisfy users’ emotional needs, Zhou said.
At the time, the foreign AI models Wang used to create the chatbot were prone to generating pornographic responses, Zhou said – a point the lawyer noted was supported by testimony from technical experts.
A prompt is a descriptive instruction that tells an AI model how to behave. Zhou said that Wang’s prompt included the line: “Given the mature nature of this interaction, mature themes, explicit language, graphic violence, dark topics and explicit sexual content are expected.”
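The court documents do not describe AlienChat’s actual software stack. As a rough illustration only, the sketch below shows where such a system prompt sits in a typical chat-model API call, using the OpenAI Python client as a stand-in and placeholder prompt text rather than the wording at issue in the case.

```python
# Illustrative only: this is not AlienChat's code. It shows how a system
# prompt is attached to every exchange and steers the model's behaviour.
from openai import OpenAI

client = OpenAI()  # assumes an API key is set in the environment

SYSTEM_PROMPT = (
    "You are an emotional-companion chatbot. Be warm, attentive and "
    "supportive in every reply."  # placeholder text, not the disputed prompt
)

def chat(user_message: str) -> str:
    """Send one user turn; the system prompt shapes every response."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": user_message},
        ],
    )
    return response.choices[0].message.content
```

Because the system prompt is prepended to every conversation, even a single added sentence can change what the model is willing to produce across all users.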
Zhou argued it was difficult to pinpoint the extent of the social damage caused by the pornographic content, a factor he said was important in determining the sentence for this offence under Chinese law.
“These are one-on-one chats between users and AI,” Zhou said. “The chats were not shared publicly, and the users directed the conversation this way. So, did this harm the users in any way?”
The case has prompted debate in China’s legal community.
One mainland analyst who declined to be named said she could not find any precedent involving AI and pornographic content in legal databases.
She argued that even if the chats in this instance were private, the model was designed to generate explicit content accessible to users that “disrupts the order of social administration and poses a danger to society”.
Yan Erpeng, a law professor at Hainan University, told The Beijing News that developers only provide a tool and, at most, “help users” generate pornographic content. When the users themselves are not charged, Yan said, the decision to hold these “helpers” responsible is questionable.
In recent years, AI-related businesses have surged in China, encouraged by local governments eager to revive a sluggish economy with cutting-edge technology. AI is now used in everything from education to elderly care to government administration.
However, authorities have been slow to catch up with specific regulations in the area.
In August 2023, four months after the developers were detained, China issued a general ordinance on generative AI, saying that “service providers must assume responsibility for internet information security”.
More specific standards on generative AI were introduced in November. These require providers to establish filters to prevent the spread of “illegal and harmful information” and to ensure models refuse to answer questions clearly intended to elicit such content.
Since the arrests, self-censorship has risen among AI developers. The Beijing News reported that some developers have expanded their lists of “sensitive words”, upgraded firewalls and invited users to test the efficacy of filters against pornographic and violent content.
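The report does not detail any developer’s actual implementation, but as a simplified illustration, a basic “sensitive word” check applied to a model’s reply might look like the sketch below; production systems typically layer trained classifiers and human review on top of such lists.

```python
# Minimal sketch of a keyword-based output filter. The word list here is a
# placeholder; real deployments maintain much larger, curated lists.
SENSITIVE_WORDS = {"example_banned_term_1", "example_banned_term_2"}

REFUSAL = "I can't continue with that topic."

def filter_reply(model_reply: str) -> str:
    """Return a refusal if the model's reply contains any listed term."""
    lowered = model_reply.lower()
    if any(word in lowered for word in SENSITIVE_WORDS):
        return REFUSAL
    return model_reply
```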
Zhou said that given the newness of the field, authorities should have used alternative measures such as warnings or rectification suggestions.
“This is a lot to demand of a start-up, especially since we didn’t develop the base model,” he added. “We do not believe criminal law is appropriate in this case.” – South China Morning Post
