A Chinese app built on a third-party artificial intelligence model has come under scrutiny for facilitating sexually explicit conversations, prompting legal experts to call for clearer safety protocols and ethical guidelines for AI service providers.
Experts say developers must strictly comply with laws and regulations during algorithm research and large model updates, stressing that legal, ethical and safety boundaries must not be crossed in pursuit of user growth.
The concerns follow the successful prosecution of the developer and operator of Alien Chat, an app integrated with an overseas AI model and created by a Shanghai-based company. A Shanghai court in September sentenced the two individuals to four years and 18 months in prison, respectively, for profiting from the production of obscene and pornographic content.
According to the ruling, the software allowed users to converse with an AI system powered by a large language model after registering and paying membership fees. Marketed as providing intimacy, companionship and emotional support for young people, the app was launched in May 2023 and made available on major platforms the following month.
By April 2024, when users reported the app to authorities, it had more than 116,000 users, including 24,000 paying members, and had collected more than 3.63 million yuan ($520,494) in membership fees.
The court ruled that the software constituted obscene material because it frequently generated content explicitly depicting sexual acts or graphically promoting pornography during user-AI interactions.
Zhou Xiaoyang, a lawyer representing one of the defendants, said on Monday that his client has appealed the ruling, arguing that "the software was not originally designed to disseminate pornography".
Zhou said modifications to the system prompts were intended to make the AI "more dynamic and capable of meeting users' emotional companionship needs". He added that the software was already in operation before China introduced interim measures for managing generative AI services in July 2023, and that developing and refining such technology takes time.
However, the court said the two defendants, as industry insiders, were aware of the interim measures but failed to conduct required security assessments during the software's operation or to file with cybersecurity authorities.
The court cited evidence showing that without the repeated modification of system prompts, the model would not have continuously generated obscene content. Investigators found the defendants had adjusted the system not to improve performance, but to facilitate smoother sexual conversations with users.
Under national regulations, generative AI service providers are prohibited from producing violent or pornographic content and must take corrective measures if such content appears. The court found that the developer and operator failed to review whether AI-generated content was safe or legal and did not implement measures to prevent the generation of pornographic material.
Xu Hao, a lawyer with Beijing Jingsh Law Firm, said that although one-on-one chats between users and AI may appear private, the underlying models and platforms are public.
"If service providers fail to conduct content safety reviews, it can harm users' physical and mental health, especially that of minors," Xu said.
He added that AI-generated content can be disseminated on a much larger scale than traditional obscene material, posing greater social harm. Xu said technological advancement must not involve illegal content, and innovation must not be misused under the guise of serving users or improving performance.
He said the case sets a benchmark for AI companion services offering emotional support, underscoring the importance of content safety and ethical standards.
Zhu Wei, an associate professor at the China University of Political Science and Law, said the design and application of large language models must strictly comply with laws and regulations and must not violate ethical norms, public order or public safety to attract users.
He said the software created conditions that enabled large-scale generation of illegal content. Within the current legal framework, when pornographic behavior in private chats is amplified through an uncontrolled, profit-driven platform, it ceases to be a private matter.
"The resulting social harm requires technical service providers to assume safety management responsibilities," Zhu said. "Failure to fulfill these obligations should result in legal accountability."
The case highlights the necessity for generative AI service providers to register with cybersecurity authorities and demonstrates the role of legal oversight in regulating AI, he said.