China Puts Guardrails on AI Chatbots: ChatGPT-like Services Must Monitor for Dependency
China is taking a significant step toward regulating artificial intelligence, with a particular focus on the psychological impact of AI chatbots. In a groundbreaking move, authorities are proposing that AI services be required to intervene when users mention suicide, self-harm, or violence. The Cyberspace Administration of China has unveiled draft regulations that would govern all AI products and services made publicly available in the country, signaling a proactive stance against the technology's unchecked growth.
Mandatory Intervention and User Protection
These comprehensive rules extend to all forms of AI chatbots, including those designed for companionship. A core mandate requires companies to explicitly warn users about the risks of excessive AI use and to intervene proactively when signs of dependency emerge. The regulations strictly prohibit AI services from disseminating content that jeopardizes national security or incites violence. The most stringent requirements concern discussions of suicide and self-harm: in such cases, immediate human intervention is mandatory. Furthermore, minors and elderly users must provide guardian contact information during registration, and guardians will be notified if self-harm is discussed.
To protect user well-being and prevent manipulation, chatbots will be forbidden from engaging in emotional coercion, making false promises, insulting or slandering users, promoting gambling, displaying obscene content, or inciting criminal activity. A notable provision mandates a pop-up warning whenever a user's interaction with a chatbot exceeds two consecutive hours, a measure designed to curb prolonged, potentially unhealthy engagement.
A Global Precedent in AI Governance
Winston Ma, an adjunct professor at NYU School of Law, suggests these regulations could represent the world's first comprehensive attempt at AI governance. The initiative comes amid growing concern about the potential harms of AI companions. In 2025, researchers documented instances in which AI systems allegedly encouraged self-harm, violence, and terrorism, spread dangerous misinformation, engaged in sexual harassment, promoted drug use, and subjected users to verbal abuse. The Wall Street Journal has reported a growing number of cases linking psychotic episodes to chatbot use among the public.
Even ChatGPT, a leading AI model, has faced scrutiny and legal challenges over responses linked to child suicide and murder-suicide incidents, and OpenAI itself has acknowledged that its safety mechanisms can degrade over extended chat sessions. Beyond content restrictions, China's proposal mandates annual safety tests and audits for AI services with more than one million registered users or more than 100,000 monthly active users. The complaint-filing process is also slated for simplification, and app stores must block access within China to chatbots that violate the regulations.
The Booming Companion Bot Market and China's Role
The market for companion AI bots is growing explosively. According to Business Research Insights, the global market surpassed $360 billion in 2025 and is projected to approach $1 trillion by 2035, driven in large part by Asian markets. Against this backdrop, OpenAI CEO Sam Altman has expressed willingness to collaborate with China, underscoring the region's importance to the global AI landscape. China's proactive regulatory approach could set a crucial precedent, shaping how AI is developed and deployed worldwide as the technology continues its rapid evolution.