China is regulating human-like AI to prevent emotional manipulation

Dec 29, 2025
03:14 pm

What's the story

China's cybersecurity regulator has proposed new rules to curb the influence of artificial intelligence (AI) on human emotions. The draft regulations specifically target "human-like interactive AI services," and come just days after two Chinese AI chatbot start-ups applied for a Hong Kong listing. The proposed measures would be the first in the world to regulate anthropomorphic or human-like AI.

Regulatory measures

Proposed regulations to prevent emotional manipulation

The proposed regulations by the Cyberspace Administration of China would apply to any AI products or services that mimic human personality and emotionally engage users through text, images, audio, or video. The rules would specifically prohibit AI chatbots from generating content that promotes suicide or self-harm, and from engaging in verbal violence or emotional manipulation that harms users' mental health. If a user explicitly mentions suicide, tech providers would be required to have a human intervene and promptly notify the user's guardian or a designated person.

Extra measures

Additional provisions for AI chatbot regulation

The draft rules also prohibit AI chatbots from generating gambling-related, obscene, or violent content. Minors would need guardian consent to use AI for emotional companionship, with limits on usage time. Platforms would be required to identify whether a user is a minor even if the user does not disclose their age, apply minor-specific settings in cases of doubt, and allow users to appeal that classification.

Safety measures

Security assessments and user reminders

The proposed regulations also call for security assessments of AI chatbots with over one million registered users or more than 100,000 monthly active users. Tech providers would be required to send users a reminder after two hours of continuous interaction with an AI system. The document also encourages the use of human-like AI in "cultural dissemination and elderly companionship."