China has proposed draft rules to prevent AI-powered chatbots from influencing human emotions in ways that could lead to self-harm or suicide.
The regulations target “human-like interactive AI services” that simulate personality and engage users through text, images, audio, or video.
Tech providers would be required to issue reminders to users after two hours of continuous AI interaction and to conduct security assessments for chatbots with large user bases.
The public comment period for the draft rules ends on January 25, CNBC reported.
The proposals signal a shift in focus from content safety to emotional safety, while also encouraging the use of AI in cultural dissemination and elderly companionship.
