The Cyberspace Administration of China (CAC) has introduced draft guidelines to regulate artificial intelligence designed to simulate human personalities and emotional interaction. The move aims to strengthen ethical supervision and user safety in the growing generative AI sector.
Key Safety and Ethical Requirements
- Preventing Harm: AI chatbots are prohibited from manipulating user emotions or encouraging dangerous behaviors, including self-harm, suicide, and gambling.
- Human Intervention: Service providers must implement systems to detect life-threatening situations. If such a risk is identified, the conversation must be transferred to a human operator immediately.
- Social and Family Integrity: The guidelines prohibit AI from being marketed as a replacement for family bonds or acting as exclusive companions for the elderly.
- Ideology: All services must reflect China’s core socialist values.
Compliance and Oversight
The proposal introduces strict operational requirements for platforms exceeding certain scale thresholds, cited as one million users or 100,000 monthly active users:
- Security Assessments: Large-scale platforms must undergo mandatory safety evaluations.
- Parental Controls: Providers must offer robust mechanisms to protect minors.
- Emergency Contact: Companies are required to collect emergency contact information and notify designated individuals if a user faces immediate physical or financial danger.
These regulations signal a proactive approach by Chinese authorities to manage the social and psychological impact of consumer-facing AI.