Comparison of China-US AI Labeling Rules: A Practical Perspective

In recent years, the rapid advancement of AI technology has spurred significant industry growth while posing unprecedented regulatory challenges, and authorities worldwide are actively exploring new regulatory models and methods to keep pace with technological progress. A notable example is the January 2024 deepfake scandal involving pop superstar Taylor Swift, in which AI-generated non-consensual pornographic images were widely shared on social media. The incident intensified calls for stricter legal measures against the misuse of deepfake technology and highlighted a central challenge in AI regulation: how to effectively identify AI-generated content.

To explore this issue, this article provides a practical comparison of China's and the U.S.'s AI labeling rules, focusing on their respective approaches, key requirements, and potential implications.

On September 14, 2024, the Cyberspace Administration of China ("CAC") issued the Measures for Labeling Artificial Intelligence Generated Synthetic Content (Draft for Comments) ("AI Labeling Measures") for public comment, alongside the draft mandatory national standard Cybersecurity Technology - Methods for Labeling Artificial Intelligence Generated Synthetic Content (Draft for Comments). The latter provides more detailed practical guidance for AI content providers on how to label AI-generated and synthesized content ("AI-generated content") and outlines typical labeling scenarios. These documents, together with the already effective Interim Measures for the Management of Generative AI Services and the Administrative Provisions on Deep Synthesis of Internet-Based Information Services, underscore China's efforts to refine and strengthen its generative AI governance framework.

Similarly, in 2024 the U.S. state of California introduced the California Digital Content Provenance Standards ("AB 3211"), which sets out strict rules on traceability and labeling and assigns responsibilities to the parties involved with AI-generated content. As global leaders in AI development, both China and the U.S. have adopted proactive regulatory frameworks. Despite differences in approach, both aim to ensure effective governance of AI-generated content through preemptive compliance measures.

I. Requirements for Obligated Parties

In terms of the types of obligated parties, the AI Labeling Measures are more comprehensive than AB 3211. Online content distribution platforms, as the primary channels for disseminating AI-generated content, are a crucial component of regulating that content. The AI Labeling Measures impose certain verification obligations on these platforms, including but not limited to user notification duties for content that has been clearly identified as AI-generated, and proactive review and notification obligations for suspected AI-generated content. In contrast, AB 3211 requires platforms to label any AI-generated content distributed through their services and prohibits them from allowing AI-generated content to be downloaded without a watermark.
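To make this concrete, below is a minimal sketch of how a content platform might triage uploads by checking for an implicit metadata label. Everything here is an illustrative assumption: the "provenance" key, the routing categories, and the reliance on PNG text chunks are hypothetical, and neither the AI Labeling Measures nor AB 3211 prescribes this mechanism; a real platform would also need its own detection tooling for the "suspected AI-generated" case.

```python
# Hypothetical platform-side triage: route an upload based on whether it
# carries an implicit metadata label. The "provenance" key is an assumed
# convention, not one mandated by either regulation.
from PIL import Image

def classify_upload(path: str) -> str:
    """Return a routing decision for an uploaded image."""
    try:
        chunks = Image.open(path).text  # PNG text chunks (empty dict if none)
    except Exception:
        return "unreadable"             # not an image we can inspect
    if "provenance" in chunks:
        return "labeled-ai"             # clearly identified: notify users
    return "needs-review"               # unlabeled: queue for proactive review

# Demo with a freshly created, unlabeled image.
Image.new("RGB", (8, 8)).save("upload.png")
print(classify_upload("upload.png"))    # -> "needs-review"
```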

One notable feature of the AI Labeling Measures is that they impose certain obligations on internet application distribution platforms, a domain not yet addressed by AB 3211. According to the Administrative Provisions on Mobile Internet Applications Information Services, a mobile internet application distribution platform is an internet information service provider that offers distribution services for mobile internet applications, including releasing, downloading, and dynamic loading. In simpler terms, these are the app markets or app stores used in daily life.

The AI Labeling Measures stipulate that when reviewing an app for release or launch, the internet application distribution platform must verify whether the service provider has implemented the required content labeling function. The logic behind this obligation is that app distribution platforms should be capable of, and responsible for, accurately identifying whether an app provides such labeling functions in accordance with the AI Labeling Measures. However, under current technological conditions, it remains to be seen whether various platforms, especially small and medium-sized ones, possess sufficient technical means to comprehensively verify whether service providers have implemented the required labeling functionalities.

II. Content Labeling Requirements

Content labeling is a core regulatory measure shared by both the AI Labeling Measures and AB 3211. The AI Labeling Measures provide more detailed guidance on how to label different types of content, helping providers categorize and mark various kinds of content. In contrast, AB 3211 does not differentiate between content types but mandates that any content generated using generative AI tools must be labeled. Under AB 3211, therefore, all generative AI systems, regardless of content type or scale, must include a labeling function.
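As an illustration of what an explicit labeling function might look like, here is a minimal sketch covering two common content types, text and images. The notice wording, placement, and helper names are assumptions made for this example; the actual required text and presentation come from the applicable rules and national standards, not from this code.

```python
# Hypothetical explicit labeling for two content types. The notice text,
# placement, and function names are illustrative assumptions only.
from PIL import Image, ImageDraw

AI_NOTICE = "AI-generated content"  # assumed label wording

def label_text(generated: str) -> str:
    """Prepend a visible notice to AI-generated text output."""
    return f"[{AI_NOTICE}] {generated}"

def label_image(img: Image.Image) -> Image.Image:
    """Stamp a visible notice near the bottom-left corner of an image."""
    labeled = img.convert("RGB")
    draw = ImageDraw.Draw(labeled)
    draw.text((10, labeled.height - 20), AI_NOTICE, fill=(255, 255, 255))
    return labeled

print(label_text("Here is a generated summary of today's news..."))
label_image(Image.new("RGB", (320, 240))).save("labeled_output.png")
```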

In addition to requiring watermark labels, AB 3211 further stipulates that all watermarks and labels must be "difficult to remove," whereas the AI Labeling Measures only require labels to be explicit. AB 3211 also mandates that provenance data be embedded in synthetic content to help users trace its origin. However, given the growing sophistication of label-removal techniques and the widespread application of AI technology, many industry experts are skeptical that such stringent requirements can be implemented in practice, arguing that current technology may not be capable of producing reliable labels that meet these standards.
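To show what embedding provenance data can look like, and why its robustness is questioned, here is a hedged sketch that stores a provenance record in a PNG's metadata. The record fields and the "provenance" key are invented for illustration; real provenance schemes, such as C2PA manifests, are far richer and typically cryptographically signed. Note how the record survives a normal save but vanishes on a simple re-encode, which is precisely the "difficult to remove" gap experts point to.

```python
# Hypothetical provenance embedding via PNG metadata. The record fields
# and the "provenance" key are illustrative assumptions; production
# systems use richer, signed formats (e.g., C2PA manifests).
import json
from PIL import Image
from PIL.PngImagePlugin import PngInfo

record = {
    "generator": "example-model-v1",     # assumed tool identifier
    "created": "2024-09-14T00:00:00Z",   # assumed timestamp
    "synthetic": True,
}

img = Image.new("RGB", (64, 64))         # stand-in for real model output
meta = PngInfo()
meta.add_text("provenance", json.dumps(record))
img.save("with_provenance.png", pnginfo=meta)

# The record survives in the file as saved...
print(Image.open("with_provenance.png").text["provenance"])

# ...but a plain re-encode that omits the metadata strips it, which is
# the robustness gap behind the "difficult to remove" requirement.
Image.open("with_provenance.png").save("stripped.png")
print(Image.open("stripped.png").text)   # -> {}
```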

III. Conclusion

Content labeling is one of the primary regulatory measures for AI-generated content. Major jurisdictions, including China, the U.S., and the EU, have issued regulatory documents requiring relevant entities to fulfill labeling obligations. Although some of these regulations are still in the legislative process, they represent significant advances in the governance of AI-generated content in their respective jurisdictions.

The regulatory challenges of AI-generated content stem from the rapid pace of AI development and its widespread accessibility. Unlike traditional content creation, AI allows users to generate realistic content quickly and easily, heightening the need for robust regulatory frameworks. Recognizing the limitations of traditional legislative approaches, the AI Labeling Measures, AB 3211, and similar legislation have, to some extent, adopted a "prevention in advance, comprehensive governance" approach. These frameworks aim to distribute compliance responsibilities across all stakeholders, leveraging collective efforts for more effective regulation.

While both frameworks provide a solid foundation, certain areas require further refinement based on public feedback and practical application. The public comment period for the AI Labeling Measures concludes on October 14, 2024, and AB 3211 remains under legislative review. How these frameworks evolve will be critical in shaping the future of AI-generated content regulation.
