In a country where the internet is tightly regulated by a swarm of censors and party yes-men, the rapid rise of AI-powered chatbots presents a new challenge for regulators and developers alike.
These systems, designed to generate instant responses and draw on vast amounts of information, test the limits of the government's ability to monitor what is said and done online.
How can companies build chatbots that are both technically capable and still toe the party’s political line? That question has taken on new urgency following the emergence of DeepSeek, a domestic AI model that has drawn comparisons to ChatGPT.
China is now attempting to address the issue with a new draft of rules for the development of AI-powered chatbots.
Notably, Beijing is seeking to impose tougher guardrails on existing and future systems, and wants to ensure that AI chatbots do not contribute to self-harm or suicide and do not generate any violent, obscene, or gambling-related content.
Released on Saturday, the draft rules are open for public comment until January 25, 2026. China may have trailed OpenAI after the rollout of ChatGPT, but the country is not only closing the technological gap; it is also moving into an area where others have been lagging: how to regulate this emerging technology.
Notably, China's draft rules also address the "emotional safety" of AI chatbots, rather than merely filtering out content that the party may see as at odds with its official policy and credo.
Apart from leveraging the technology to root out what it considers dangerous and harmful content, China's new rules seek to address some pressing societal issues, such as "cultural dissemination and elderly companionship."
China is often criticized for leveraging technology to impose its will on its citizens, but the country is now also a trailblazer in crafting rules around AI and its safe use.
Image credit: Unsplash.com
