Chinese government announces new labelling regulations for AI-generated content

In an effort to reduce misinformation and protect public interests, China has announced new regulations for labelling AI-generated content.

March 18, 2025
Matilda French

Following the Spanish government's approval last week of a bill that will impose hefty fines for failing to label AI-generated content, the Chinese government has announced similar regulations, effective from September 2025, requiring all AI-generated content to be clearly labelled, both visibly and in metadata.

The new measures, issued by the Cyberspace Administration of China (CAC) alongside the Ministry of Industry and Information Technology, the Ministry of Public Security, and the National Radio and Television Administration, aim to combat the spread of misinformation and enhance transparency in the rapidly evolving world of AI-generated media.

Service providers, including those offering large language models (LLMs), will be required to add explicit labels to all AI-generated content, including text, images, audio, videos, and virtual scenes.

Interestingly, the regulations allow for exceptions in cases of "social concerns and industrial needs," where users may request unlabelled AI-generated content. However, in such instances, the generating application must clearly communicate this requirement to the user and maintain detailed logs for traceability purposes.

The CAC has also explicitly prohibited the malicious removal, tampering, forgery, or concealment of AI labels. This prohibition even extends to falsely labelling human-created content as AI-generated, underscoring the government's commitment to the integrity of the labelling system.

These new regulations follow similar initiatives worldwide, such as Spain’s bill, which will impose fines of up to €35 million or 2-7% of turnover on companies that use AI-generated content without effectively labelling it as such.

Spain’s digital transformation minister, Oscar Lopez, told reporters, “AI is a very powerful tool that can be used to improve our lives… or to spread misinformation and attack democracy.”

Lopez noted that everyone is vulnerable to "deepfake" attacks, a term for AI-generated videos, photos, or audio presented as genuine.

As the implementation date approaches, the effectiveness of these measures in reducing misinformation and protecting public interests remains to be seen.
