Introduction
TL;DR
- The Cyberspace Administration of China (CAC) released draft “Interim Measures” on anthropomorphic (human-like) interactive AI services on 2025-12-27, with public comments due by 2026-01-25.
- The draft applies to public-facing AI products/services in China that simulate human personality/thinking/communication styles and enable emotive interaction via text/images/audio/video.
- Core obligations include: (1) conspicuous “you are interacting with AI” notices, (2) pop-up reminders after 2 hours of continuous use, (3) detection and intervention for addiction/extreme emotions, (4) crisis playbooks with human takeover, and (5) safety assessments for large-scale services (e.g., 1M+ registered users or 100k+ MAU).
China’s new draft is not just about content moderation. It operationalizes “emotional safety” as product requirements for AI companions and other human-like conversational systems. Reuters summarizes it as a move to tighten oversight of AI that simulates human personalities and emotional interaction, including warnings against excessive use and interventions when addiction signs appear.
Why it matters: This shifts compliance from policy text to concrete UX/operations—time tracking, dynamic notices, escalation paths, and guardian controls become first-class features.
Scope and Definitions: What “Human-Like Interactive AI” Covers
What services are in scope
The draft covers AI services offered to the public in China that simulate human traits (personality, thinking patterns, communication styles) and interact emotionally through multiple modalities (text, image, audio, video).
Public consultation timeline
CAC published the consultation notice on 2025-12-27 and set a feedback deadline of 2026-01-25.
Why it matters: If your product positioning includes companionship, empathy, or relationship-like interaction, the draft’s scope language is broad enough to capture many mainstream conversational AI deployments.
Key Obligations: Warnings, Interventions, and Guardrails
Mandatory AI disclosure and dynamic notices
Providers must conspicuously inform users they are interacting with AI (not a natural person) and deliver dynamic notices at first use/re-login and when overdependence is detected.
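As a minimal sketch of the notice-trigger logic described above (the event names and helper are hypothetical; the trigger list is our reading of the draft):

```python
# Hypothetical helper: the draft requires the "you are interacting with AI"
# notice at first use, at re-login, and whenever overdependence is detected.
def disclosure_due(event: str, overdependence_detected: bool) -> bool:
    return event in {"first_use", "re_login"} or overdependence_detected
```

The key point the draft makes is that disclosure is recurring, not one-time: an overdependence signal re-triggers the notice even for users who acknowledged it before.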
2-hour continuous use reminder
If a user continuously uses the service for more than 2 hours, providers must push a pop-up reminder to pause.
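The 2-hour rule implies session-level time tracking. A sketch of one possible implementation follows; the 30-minute idle window that ends a "continuous" session is our assumption, since the draft does not define continuity:

```python
from dataclasses import dataclass, field
from typing import Optional
import time

CONTINUOUS_USE_LIMIT_S = 2 * 60 * 60   # 2 hours, per the draft
IDLE_RESET_S = 30 * 60                 # assumption: 30 min idle breaks "continuous use"

@dataclass
class SessionTimer:
    session_start: float = field(default_factory=time.monotonic)
    last_activity: float = field(default_factory=time.monotonic)
    reminded: bool = False

    def on_activity(self, now: Optional[float] = None) -> bool:
        """Record user activity; return True when the pause pop-up is due."""
        now = time.monotonic() if now is None else now
        if now - self.last_activity > IDLE_RESET_S:
            # Gap long enough to break continuity: restart the clock.
            self.session_start = now
            self.reminded = False
        self.last_activity = now
        if not self.reminded and now - self.session_start >= CONTINUOUS_USE_LIMIT_S:
            self.reminded = True
            return True   # caller shows the pop-up reminder to pause
        return False
```

Using a monotonic clock avoids false triggers when the device's wall clock changes; the `reminded` flag fires the pop-up once per continuous session.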
Emotional state detection and intervention
Providers must assess user emotion/dependence (while protecting privacy) and intervene when users show extreme emotions or addiction-like behavior.
Crisis handling and human takeover
For explicit self-harm scenarios, the draft requires human takeover and measures to contact guardians/emergency contacts, with special registration-time requirements for minors/elderly users.
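The detection-to-intervention pipeline in the two sections above can be sketched as a risk-level router. The levels and action names here are hypothetical placeholders, not terms from the draft; the mapping reflects our reading of its requirements:

```python
from enum import Enum

class RiskLevel(Enum):
    NONE = 0
    DEPENDENCE = 1       # addiction-like usage patterns
    EXTREME_EMOTION = 2  # e.g. sustained distress
    SELF_HARM = 3        # explicit self-harm signals

def route_intervention(level: RiskLevel, is_minor: bool) -> list[str]:
    """Map a detected risk level to required actions (hypothetical action names)."""
    actions: list[str] = []
    if level is RiskLevel.DEPENDENCE:
        actions += ["show_overdependence_notice", "suggest_break"]
    elif level is RiskLevel.EXTREME_EMOTION:
        actions += ["deescalation_response", "surface_support_resources"]
    elif level is RiskLevel.SELF_HARM:
        # Draft: human takeover plus contacting guardians/emergency contacts.
        actions += ["human_takeover", "contact_emergency_contact"]
        if is_minor:
            actions.append("notify_guardian")
    return actions
```

In practice the classifier feeding `RiskLevel` is the hard part; the point here is that the draft expects a deterministic escalation path, with humans in the loop for the highest tier.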
Safety assessment triggers (scale thresholds)
Safety assessments must be conducted and submitted to provincial CAC when certain triggers occur, including new feature launches/major changes and scale thresholds such as 1M+ registered users or 100k+ MAU.
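The scale thresholds translate directly into a compliance check. A sketch, assuming the triggers combine with OR (the thresholds are from the draft; the combination logic is our reading):

```python
REGISTERED_USER_THRESHOLD = 1_000_000  # per the draft
MAU_THRESHOLD = 100_000                # per the draft

def safety_assessment_required(registered_users: int,
                               monthly_active_users: int,
                               major_feature_change: bool) -> bool:
    """True when any assessment trigger fires (OR-combination is assumed)."""
    return (major_feature_change
            or registered_users >= REGISTERED_USER_THRESHOLD
            or monthly_active_users >= MAU_THRESHOLD)
```

A check like this would gate release pipelines: crossing either user threshold, or shipping a major feature change, flags the service for a safety assessment submission to the provincial CAC.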
Why it matters: These are implementable requirements—telemetry, risk classifiers, escalation tooling, and guardian consoles—rather than abstract principles.
How This Fits China’s Existing AI Governance Stack
Generative AI “Interim Measures” (2023)
China’s 2023 “Interim Measures for the Administration of Generative AI Services” already regulates public-facing generative AI content/services in China, emphasizing a balance of development and security alongside classified, graded supervision.
AI-generated content labeling regime (effective 2025-09-01)
CAC issued the “Measures for Labeling AI-Generated Synthetic Content” on 2025-03-14, effective 2025-09-01, with a supporting mandatory national standard GB 45438-2025 (effective 2025-09-01).
Why it matters: The new draft adds a dedicated “emotive interaction & dependency” layer on top of existing rules for content governance and labeling—compliance becomes multi-layered.
Conclusion
- CAC’s 2025-12-27 draft targets anthropomorphic, emotive AI services and mandates warnings, interventions, and strong protections for minors.
- The draft specifies product-level requirements (AI disclosure, 2-hour pop-ups, crisis escalation with human takeover).
- Large-scale services face formal safety assessment/reporting triggers (e.g., 1M+ registered users or 100k+ MAU).
- This sits alongside China’s 2023 generative AI measures and the 2025-09-01 AI content labeling regime (incl. GB 45438-2025).
Summary
- Draft published: 2025-12-27; comments due: 2026-01-25.
- In scope: public-facing, emotive, human-like conversational AI in China.
- Key duties: disclosure, timeboxing, intervention, minors mode, safety assessments.
Recommended Hashtags
#ai #aigovernance #china #trustandsafety #privacy #compliance #chatbots #regulation #riskmanagement
References
- [China issues draft rules to regulate AI with human-like interaction, 2025-12-27](https://www.reuters.com/world/asia-pacific/china-issues-drafts-rules-regulate-ai-with-human-like-interaction-2025-12-27/)
- [Draft Interim Measures on Anthropomorphic Interactive AI Services, 2025-12-27](https://www.cac.gov.cn/2025-12/27/c_1768571207311996.htm)
- [Interim Measures for the Administration of Generative AI Services, 2023-07-13](https://www.cac.gov.cn/2023-07/13/c_1690898327029107.htm)
- [Measures for Labeling AI-Generated Synthetic Content, 2025-03-14](https://www.cac.gov.cn/2025-03/14/c_1743654684782215.htm)
- [GB 45438-2025 Labeling Method for AI-Generated Content, 2025-02-28](https://openstd.samr.gov.cn/bzgk/gb/newGbInfo?hcno=F32EA2A561F1886CD8D606513512D547)
- [AI companions meet the law: New York and California draw the first lines, 2025-12-23](https://www.reuters.com/legal/litigation/ai-companions-meet-law-new-york-california-draw-first-lines–pracin-2025-12-23/)