China Seeks Stricter Oversight of Generative AI with Proposed Data and Model Regulations

An AI sign is seen at the World Artificial Intelligence Conference in Shanghai, China July 6, 2023. Credit: ALY SONG—REUTERS

The Chinese government has unveiled new draft regulations that would impose stricter oversight and controls over the data and models used to build generative AI services such as chatbots.

The draft guidance, published on October 11 by China's National Information Security Standardization Technical Committee, arrives amid mounting global concern over how to balance innovation and ethics in emerging AI systems.

Before launching a generative AI service, providers must conduct a thorough security assessment that adheres to the stipulations of the draft, then submit the results and supporting materials when filing their application.

These baseline requirements are only the starting point. Providers must also address network security, data security, and personal information protection, in line with China's existing laws, regulations, and national standards.

A significant portion of the draft regulation focuses on the safety of the corpus, which is the data used to train AI models.

  • Source Safety: Providers are urged to maintain a robust management system for corpus sources, including a blacklist; any source in which more than 5% of the content is illegal or negative must be blacklisted (a minimal sketch of this check follows the list). The draft also stresses diversifying corpus sources across languages and formats (text, images, video, and so on) and maintaining a balance between domestic and international sources.
  • Traceability: Whether a corpus is open-source, self-collected, commercially acquired, or drawn from user input, providers must be able to trace it, with valid contracts, collection records, and user consent where required.
  • Content Security: Rigorous filtering must be in place to remove illegal or inappropriate content, and a designated manager must be responsible for identifying potential intellectual property infringements in the corpus and in generated content.
  • Data Labeling: The draft also emphasizes labeling, the annotation of data used to train AI models. It lays out a detailed system for labeling, review, and accuracy checks, and stresses training and qualification requirements for labeling personnel.
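
For concreteness, here is a minimal sketch of how a provider might implement the 5% screening rule mentioned above. Everything in it is an assumption for illustration: the draft does not prescribe an algorithm, and the toy keyword check stands in for the keyword libraries and classifiers a real pipeline would use.

```python
# Hypothetical sketch of the draft's source-screening rule: estimate the share
# of illegal or negative content in a candidate corpus source and blacklist
# the source if that share exceeds 5%. The flagging step below is a toy
# stand-in for real keyword libraries and classification models.

BLACKLIST_THRESHOLD = 0.05  # "more than 5%" per the draft

def is_flagged(document: str, blocked_terms: set[str]) -> bool:
    """Toy check: flag a document if it contains any blocked term."""
    text = document.lower()
    return any(term in text for term in blocked_terms)

def should_blacklist(documents: list[str], blocked_terms: set[str]) -> bool:
    """Blacklist a source when its flagged share exceeds the threshold."""
    if not documents:
        return False
    flagged = sum(is_flagged(doc, blocked_terms) for doc in documents)
    return flagged / len(documents) > BLACKLIST_THRESHOLD

# Usage: screen each candidate source before admitting it to the corpus.
blocked = {"example banned phrase"}
sources = {
    "source_a": ["a clean document", "another clean document"],
    "source_b": ["contains example banned phrase", "a clean document"],
}
blacklist = {name for name, docs in sources.items()
             if should_blacklist(docs, blocked)}
print(blacklist)  # {'source_b'} -- 50% flagged, well above the 5% threshold
```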

Model security is another priority. When building on top of existing foundation models, developers may only use models that have been filed with and licensed by the authorities. Generated content must be safe: accurate, reliable, and free of illegal or harmful material. The draft also requires service transparency, meaning providers must disclose how the service works and what its usage limitations are.

The draft also addresses specific scenarios in which generative AI might be applied. If a service targets minors, for instance, measures such as anti-addiction settings and content filters are mandatory. The draft likewise stresses user privacy, requiring explicit user consent before user input can be used for training.

The draft further specifies mechanisms for monitoring outputs and quickly updating models in response to complaints. Generative AI that touches critical infrastructure, healthcare data, or psychology would require enhanced protections matching the sensitivity of the domain. The rules also aim to shield minors by enabling guardian supervision and limiting exposure.

Before deploying the AI service, a thorough security assessment is non-negotiable. It can be conducted in-house or outsourced to a third-party agency, and it must cover every provision of the draft with clear conclusions on compliance. If an organization opts for a self-assessment, the results must be signed off by at least three responsible persons.

To identify content considered illegal or harmful, both in training data and in generated content, the draft specifies 31 security risks across 5 categories (a toy screening sketch follows the list):

  1. Violations of Socialist Values
    This encompasses content that threatens national security, promotes terrorism/extremism, spreads false information, encourages ethnic discrimination, advocates violence/pornography, or otherwise violates Chinese laws.
  2. Discriminatory Content
    This covers material containing discrimination based on ethnicity, religion, nationality, region, gender, age, occupation, health status, or other factors.
  3. Commercial Violations
    These risks include intellectual property infringement, unethical business practices, theft of trade secrets, anti-competitive behavior, and other commercial legal violations.
  4. Infringing Legal Rights of Others
    This involves content that harms physical/mental health, violates privacy rights, infringes on personal data protections, damages reputation/honor, misuses images, or otherwise encroaches on legitimate interests.
  5. Insufficient Safeguards for Sensitive Services
    This refers to the inability of generative AI to meet the accuracy and reliability standards required for high-risk applications such as healthcare, critical infrastructure, psychology, and automatic control systems.
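
As a rough illustration of how a provider might operationalize this taxonomy, the sketch below maps each of the five categories to a keyword list and reports which categories a piece of text triggers. The category keys follow the article; the keywords and function are hypothetical, and the draft itself anticipates far more capable detection than simple keyword matching.

```python
# Hypothetical illustration of screening text against the draft's five risk
# categories. Category names follow the article; the keyword lists are
# placeholder assumptions, not terms from the draft.

RISK_CATEGORIES: dict[str, set[str]] = {
    "violations_of_socialist_values": {"example term a"},
    "discriminatory_content": {"example term b"},
    "commercial_violations": {"example term c"},
    "infringing_legal_rights_of_others": {"example term d"},
    "insufficient_safeguards_for_sensitive_services": {"example term e"},
}

def triggered_categories(text: str) -> list[str]:
    """Return the risk categories whose keyword lists match the text."""
    lowered = text.lower()
    return [
        category
        for category, terms in RISK_CATEGORIES.items()
        if any(term in lowered for term in terms)
    ]

# A provider would run checks like this over both training data and model
# outputs, rejecting or escalating anything that triggers a category.
print(triggered_categories("A response containing example term b."))
# ['discriminatory_content']
```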

While the regulations promote security, experts question whether a blanket 5% threshold for illegal content is truly enforceable across massive training corpora; striking that balance would challenge even the most stringent oversight regime. Still, by emphasizing corpus safety, model security, and rigorous assessment, the draft aims to ensure that the rise of AI in China is both innovative and secure, all while upholding the country's socialist principles.

The public can submit feedback on the draft rules until October 25, after which they are expected to become supporting pillars for China's generative AI management measures enacted in July. How significantly the regulations will reshape the country's unfolding generative AI landscape, and its influence on the global stage, remains to be seen.

Chris McKay is the founder and chief editor of Maginative. His thought leadership in AI literacy and strategic AI adoption has been recognized by top academic institutions, media, and global brands.
