7.2 Interpreting China's Core AI Laws and Regulations

Parallel Paths of Regulation and Development: Interpreting China’s Core AI Legal Framework

Facing the rapid development of artificial intelligence (AI) technology and its profound impact on all aspects of the economy and society, China has adopted a regulatory strategy that emphasizes both actively embracing innovation and prudently preventing risks. Differing from the paths taken by other major economies (e.g., the EU favoring comprehensive legislation, the US focusing on applying existing laws and guiding industry standards), China has not rushed to enact a unified, all-encompassing “Artificial Intelligence Law.” Instead, it prefers a legislative and regulatory model characterized by “small incisions, rapid responses, and prominent focus areas.”

This model has two complementary strands. On one hand, China relies on and continuously strengthens the foundational legal framework of the digital economy, such as the Cybersecurity Law, Data Security Law, and Personal Information Protection Law, which provide the fundamental legal basis and guiding principles for AI governance. On the other hand, for specific AI applications with prominent risks, significant social impact, and rapid development (such as algorithm recommendation, deep synthesis, and generative AI services), China has proactively formulated and issued a series of specialized departmental rules and administrative provisions, supplemented by supporting national standards and technical guidelines that refine the requirements, aiming for regulatory coverage that is more targeted, operational, and timely.

For any organization developing, providing, or using AI technologies and services within China (including law firms providing legal services, corporate legal departments, and legal tech companies supporting them), deeply understanding and strictly complying with this evolving AI legal and regulatory system with Chinese characteristics is the absolute prerequisite for ensuring their business activities are lawful, compliant, and sustainable.

1. Foundational Legal Framework: The “Pillars” of AI Compliance Governance

Before and after the introduction of AI-specific regulations, the following foundational laws, enacted in recent years, collectively form the legal cornerstone of AI governance and compliance in China, akin to the “pillars” supporting the entire structure:

  • Cybersecurity Law of the People’s Republic of China (CSL) - Effective June 1, 2017:

    • Core Positioning & Principles: China’s foundational law for cyberspace governance. Establishes the basic principle of cyberspace sovereignty and regulates activities related to constructing, operating, maintaining, and using networks within China.
    • Key Systems & Obligations:
      • Mandates network operators (including providers of AI services operating networks) fulfill network security protection obligations, such as implementing technical measures to ensure safe and stable network operation, developing emergency plans, preventing cyberattacks and data breaches.
      • Establishes the Multi-Level Protection Scheme (MLPS) for cybersecurity, requiring important network systems to adopt security measures corresponding to their level of importance.
      • Contains early basic rules for personal information protection (later systematically superseded and refined by the PIPL), such as requiring explicit purpose, means, scope disclosure and consent for collection/use, and data security obligations for network operators.
      • Sets up a security protection regime for Critical Information Infrastructure (CII), imposing stricter security requirements on network facilities and information systems in important sectors affecting national security and public welfare (e.g., energy, transport, finance, public services).
    • Relevance to AI:
      • Any AI system relying on a network environment must comply with CSL’s basic requirements for network operation security.
      • Using AI for cyberattacks, disseminating illegal/harmful information, stealing network data, etc., is subject to CSL regulation and penalties.
      • Early AI applications involving collection and use of personal information (e.g., user profiling) needed to adhere to CSL’s principles on personal information protection.
      • AI systems applied in the operation or security of CII will need to meet higher security standards.
  • Data Security Law of the People’s Republic of China (DSL) - Effective September 1, 2021:

    • Core Positioning & Principles: China’s foundational law for data processing activities and data security protection. Establishes the principle of balancing data security maintenance with promoting data development and utilization, aiming to safeguard data security, protect the lawful rights and interests of individuals and organizations, and uphold national sovereignty, security, and development interests.
    • Key Systems & Obligations:
      • Establishes a data classification and hierarchical protection system, requiring data to be protected based on its importance in socio-economic development and the potential harm if tampered with, destroyed, leaked, or illegally accessed/used. (Specific classification standards and catalogs of important data are still under development).
      • Mandates data processors (including organizations and individuals using AI for data processing) fulfill data security protection obligations, such as establishing comprehensive data security management systems, implementing appropriate technical safeguards, conducting risk monitoring, and handling emergencies.
      • Imposes stricter management systems for national core data.
      • Introduces strict regulatory requirements for processing activities involving data identified as “Important Data” (especially cross-border transfers), typically requiring a data export security assessment. (The criteria for identifying “Important Data” and specific catalogs are key focus areas and will significantly impact AI applications handling such data).
      • Regulates data trading activities, requiring data brokers to assume corresponding responsibilities.
      • Establishes a data security review system for data processing activities that affect or may affect national security.
    • Relevance to AI:
      • AI model training and operation inherently involve collecting, storing, using, processing, and analyzing massive amounts of data, all of which must fully comply with DSL’s data security obligations.
      • If the data processed by an AI system includes content designated by the state as “Important Data” (e.g., large-scale data from critical sectors like industry, telecom, transport, finance, natural resources), related data processing activities (especially cross-border transfers, such as using overseas AI models or cloud services) will face extremely strict regulatory scrutiny and approval requirements.
      • Using AI for data analysis, mining, and value creation must be predicated on not endangering national security, harming public interests, or infringing upon the lawful rights and interests of individuals and organizations.
  • Personal Information Protection Law of the People’s Republic of China (PIPL) - Effective November 1, 2021:

    • Core Positioning & Principles: China’s first comprehensive law specifically regulating personal information processing activities. Establishes principles of lawfulness, legitimacy, necessity, and good faith, with “informed consent” as the core legal basis (while also providing other bases like contract necessity, legal duty necessity). PIPL grants extensive rights to data subjects (including rights to know, decide, access, copy, correct, delete, withdraw consent, data portability) and sets clear and strict requirements for processing sensitive personal information, automated decision-making, cross-border provision of personal information, and obligations of personal information processors.
    • Relevance to AI (Extremely Close and Broad):
      • Compliance Basis for AI Processing Personal Information: Any AI application involving collecting, using, analyzing personal information (e.g., personalized recommendation algorithms, user profiling based on behavior, facial recognition access control, analysis of smart customer service interactions, diagnostic assistance by medical AI) must first ensure a clear, lawful basis for processing. The most common basis is obtaining individual consent, which must be voluntary, explicit, and based on full prior notification (purpose, manner, type, retention period, rights exercise methods, etc.). Processing must also adhere to principles like data minimization (type, scope, frequency necessary for the purpose) and openness and transparency.
      • Special Protection for Sensitive Personal Information: PIPL imposes stricter requirements for processing sensitive personal information (data which, if leaked or illegally used, could easily harm personal dignity or endanger personal/property safety, including biometric information (face, voiceprint, fingerprint, iris), religious beliefs, specific identities, health, financial accounts, location tracking, and information of minors under 14). Besides requiring a specific purpose and sufficient necessity, it mandates obtaining “separate consent” (a distinct consent specifically for processing sensitive info, not bundled in general terms) or, in certain cases, written consent. Processors must also conduct Personal Information Protection Impact Assessments (PIPIAs). This has profound compliance implications for AI applications relying on biometrics (like facial/voice recognition) or processing sensitive health/financial data (e.g., AI security surveillance, medical image analysis, financial risk models).
      • Regulating Automated Decision-Making: For using AI algorithms for automated decision-making (activities relying solely or primarily on automated analysis of personal behavior, interests, health, credit status, etc., to make decisions, e.g., intelligent credit scoring, personalized insurance pricing, automated hiring screening, personalized content recommendations on online platforms), PIPL sets specific requirements:
        • Ensure Transparency: Inform individuals about the basic situation of automated decision-making.
        • Ensure Fairness and Justice: Must not impose unreasonable differential treatment on individuals regarding transaction prices or other terms (explicitly targeting practices like “big data price discrimination”).
        • Provide Right to Explanation and Refusal: Individuals have the right to request an explanation of the results of automated decisions affecting them; if such decisions have a significant impact on their rights and interests, they have the right to refuse decisions made solely by automated means and request human intervention. This indirectly imposes requirements on the explainability of relevant AI algorithms.
      • Strict Regulation of Cross-Border Personal Information Transfers: If an AI service or its supporting systems involve transferring personal information collected and generated within China to overseas recipients (e.g., using overseas AI model APIs, storing data on overseas servers, granting access to overseas affiliates), one of the four statutory conditions under PIPL must be met:
        1. Passing a security assessment organized by the Cyberspace Administration of China (CAC) (required for CII operators, processors handling important data, and processors whose personal information volumes reach specified thresholds).
        2. Obtaining personal information protection certification from a professional institution according to CAC provisions.
        3. Concluding a standard contract formulated by the CAC with the overseas recipient.
        4. Meeting other conditions stipulated by laws, regulations, or the CAC.
        In addition, separate consent from the individual is required, and a personal information protection impact assessment must be conducted. This presents significant compliance challenges for legal institutions or businesses needing to leverage global AI technologies/services or whose operations involve cross-border data flows.
      • Indirect Push for Algorithm Transparency & Explanation: While PIPL doesn’t explicitly grant a right to “meaningful information about the logic involved” for all automated decisions like GDPR, the rights to request explanation and refuse solely automated decisions effectively place practical demands on the transparency and explainability of relevant AI algorithms. Processors need to be able to explain the basic logic and main influencing factors of their automated decisions to some extent.
  • E-commerce Law of the People’s Republic of China - Effective January 1, 2019:

    • Relevance to AI: This law explicitly states that e-commerce operators providing personalized recommendations of goods or services based on consumer interests, consumption habits, etc., must simultaneously offer options not targeted to their personal characteristics. Furthermore, it prohibits using algorithms, big data, etc., to impose unreasonable differential treatment regarding transaction prices or other terms based on consumer preferences or transaction habits (explicitly banning “big data price discrimination”). This directly regulates the application of AI in personalized recommendation and pricing within e-commerce.
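
The four statutory cross-border transfer routes under PIPL described above can be summarized as a rough decision sketch. The following Python fragment is a hypothetical illustration, not legal advice: the trigger conditions are simplified, the type and function names (`TransferProfile`, `required_route`) are invented for this example, and the actual thresholds depend on CAC rules in force at the time.

```python
# Simplified sketch of PIPL's cross-border transfer routes.
# Names and trigger conditions are illustrative assumptions, not the statute's text.
from dataclasses import dataclass

@dataclass
class TransferProfile:
    is_ciio: bool                 # operator of Critical Information Infrastructure
    handles_important_data: bool  # processes state-designated "Important Data"
    above_volume_threshold: bool  # personal information volume exceeds CAC thresholds

def required_route(p: TransferProfile) -> str:
    """Return which statutory route a transfer would likely need (simplified)."""
    # Any of these triggers requires the CAC-organized security assessment.
    if p.is_ciio or p.handles_important_data or p.above_volume_threshold:
        return "CAC security assessment"
    # Below the assessment triggers, certification or the CAC standard contract
    # can generally be used; other routes may be provided by law or CAC rules.
    return "certification or CAC standard contract"
```

In every case, the separate-consent and impact-assessment obligations described above still apply on top of whichever route is chosen.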

2. Specific Regulations and Standards for AI Applications: The “Small Incision, Rapid Response, Precision Strike” Model


Building upon the macro principles and baseline requirements set by the foundational laws, Chinese regulatory bodies (primarily the Cyberspace Administration of China - CAC, Ministry of Industry and Information Technology - MIIT, Ministry of Public Security - MPS, National Radio and Television Administration - NRTA, etc.) have demonstrated high agility and proactiveness. They have swiftly and precisely issued several landmark departmental rules or administrative provisions targeting specific AI technology application areas known for prominent risks, rapid development, and high public attention. This “small incision, rapid response” regulatory model, supplemented by evolving national standards and technical guidelines, contrasts with the EU’s approach of seeking a single comprehensive law, reflecting China’s unique strategy for balancing innovation and risk while quickly addressing technological challenges.

  • Provisions on the Management of Algorithm Recommendation in Internet Information Services - Effective March 1, 2022:

    • Regulated Entities: Clearly targets all providers using algorithm recommendation technology (including generative/synthetic types, personalized push types, ranking/selection types, search/filter types, scheduling/decision types, etc.) to offer internet information services (e.g., news apps, short video platforms, social media, search engines, e-commerce platforms, food delivery platforms).
    • Core Requirements:
      • Algorithm Filing System: Providers of algorithm recommendation services with public opinion attributes or social mobilization capabilities must file their algorithms with the CAC. Aims to enhance transparency and traceability (multiple batches of filings completed).
      • User Rights to Know and Choose: Must inform users conspicuously, clearly, and understandably that algorithm recommendation is provided, and disclose the basic principles, purpose, and main operating mechanisms. Crucially, must provide users with options not targeted to their personal characteristics and convenient options to turn off algorithm recommendation services. Service must cease immediately upon user opt-out.
      • Protecting Users’ Lawful Rights:
        • Must not use algorithms to implement unreasonable differential treatment (e.g., price discrimination).
        • Must not use algorithms to induce users into addiction, excessive consumption, or other illegal/unethical behaviors.
        • Must implement special protections when serving minors, restricting content potentially harmful to their well-being.
        • Must consider the habits of elderly users, providing convenient and suitable intelligent services.
        • Must protect the lawful rights of workers (e.g., gig workers like delivery riders, ride-hailing drivers) when using algorithms for work scheduling (rest rights, reasonable pay).
      • Strengthened Content Management: Providers must establish feature libraries to identify illegal/harmful information; must not use algorithms to recommend prohibited information or engage in activities disrupting socio-economic order or harming public interest; must conspicuously label information generated or synthesized by algorithms (e.g., AI-generated summaries, comments).
  • Provisions on the Administration of Deep Synthesis in Internet Information Services - Effective January 10, 2023:

    • Regulated Entities: Targets deep synthesis service providers (platforms offering face-swapping apps, AI art tools, voice cloning services, AI writing assistants, etc.) and technical supporters (providing underlying tech/tools) that use deep learning, virtual reality, etc., to generate or edit (“deep synthesis”) online information like text, images, audio, video, virtual scenes. Covers almost all mainstream AIGC technologies.
    • Core Requirements:
      • Mandatory & Conspicuous Labeling: One of the most crucial and landmark requirements. Explicitly mandates that content generated or edited using deep synthesis (especially images, videos) must be labeled using technical measures that don’t hinder use (termed “implicit labels” in later Measures) and prominently identified (termed “explicit labels” in later Measures) to clearly inform the public about the deep synthesis, avoiding confusion or misidentification.
      • More Prominent Labeling for High-Risk Content: Requires even more conspicuous labeling for deep synthesis content likely to cause public confusion or misidentification (e.g., simulating images or voices of specific individuals like celebrities or officials).
      • Explicit Prohibitions: Strictly prohibits using deep synthesis services for any illegal activities, especially endangering national security/interests, harming national image, inciting separatism, disrupting social stability, spreading false news, infringing others’ reputation, portrait rights, privacy, intellectual property, etc.
      • Real-name Verification of Users: Providers are required to verify the real identity of service users based on mobile numbers, ID numbers, or unified social credit codes.
      • Security Assessment & Algorithm Filing: Providers and technical supporters with public opinion attributes or social mobilization capabilities must also undergo security assessments and complete algorithm filing procedures (deep synthesis algorithm filings have also commenced).
  • Interim Measures for the Management of Generative Artificial Intelligence Services - Effective August 15, 2023:

    • Regulated Entities: Specifically targets organizations and individuals using generative AI technology (defined as models and related tech capable of generating content like text, images, audio, video - i.e., core AIGC tech) to provide services to the public within the territory of the PRC. Note: Primarily regulates direct “public-facing services,” not internal use, R&D, or services not offered to the Chinese public.
    • Core Principles & Regulatory Approach: Explicitly states the principle of balancing development and security, promoting innovation and governing according to law. Adopts inclusive prudence and classified, hierarchical supervision towards generative AI, reflecting a strategy of encouraging progress while setting necessary boundaries.
    • Core Requirements:
      • Ensuring Lawfulness of Data Sources & Model Training:
        • Take effective measures to prevent discrimination based on ethnicity, belief, nationality, region, gender, age, occupation, health, etc., in generated content.
        • Respect intellectual property, business ethics, and trade secrets during algorithm design, training data selection, model generation, and optimization.
        • Must respect others’ lawful rights, not harm physical/mental health, not infringe portrait rights, reputation rights, privacy rights, personal information rights.
        • Sources of data used for pre-training and optimization training must be lawful, including not infringing IP or personal information rights. If personal information is used, PIPL must be complied with. (Related draft national standards like “Information Security Technology - Security Specification for Pre-training and Optimization Training Data of Generative Artificial Intelligence” provide further details).
      • Compliance in Personal Information Processing: When processing user personal information during service provision, must strictly comply with PIPL and other laws, not illegally collect personal info, obtain consent (or meet other lawful bases), fulfill notification duties and protection responsibilities.
      • Conspicuous Labeling of Generated Content: Referencing the Deep Synthesis Provisions, requires providers to effectively label generated image, video, etc., content according to regulations. (Further detailed by the latest Labeling Measures).
      • Content Security & Risk Prevention: Providers bear responsibility as online content producers, must take effective measures (technical and manual review) to prevent generation and dissemination of illegal, false, harmful content; establish mechanisms for handling user complaints and reports.
      • Algorithm Filing & Security Assessment (Dual Filing): Providers offering generative AI services with public opinion attributes or social mobilization capabilities must undergo security assessments and complete both algorithm filing and generative AI service filing (“large model filing”). In practice, a “dual filing” mechanism for large models has emerged, with filings increasing significantly after 2024. Technical guidance documents like “Basic Security Requirements for Generative Artificial Intelligence Services” (issued 2024) provide key bases for security assessments, covering corpus security, model security, etc.
      • Clear Service Agreements & User Responsibility: Need clear service agreements defining rights/obligations, requiring service users to also comply with laws, not using the service for activities harming national security, public interest, or infringing others’ rights.
      • Compliance During Model Training: Even during pre-training and optimization training R&D phases, relevant laws like CSL, DSL, PIPL must be observed.
  • Measures for the Identification of AI-Generated Synthetic Content - Released March 7, 2025, Effective September 1, 2025:

    • Issuing Bodies: Jointly issued by CAC, MIIT, MPS, and NRTA.
    • Regulated Entities: Applies to network information service providers required to conduct AI-generated synthetic content identification under the Algorithm Recommendation, Deep Synthesis, and Generative AI regulations. Also imposes requirements on content dissemination platforms and users.
    • Core Requirements:
      • Refines and Strengthens Labeling Requirements: Further details and operationalizes the labeling requirements mentioned in the earlier Deep Synthesis and Generative AI rules.
      • Distinguishes Explicit and Implicit Labels:
        • Explicit Labeling: Refers to text, sound, graphics, etc., added to the content or interface that are clearly perceptible to users. Must be added in situations specified by Deep Synthesis Provisions Article 17(1) (likely to cause public confusion or misidentification), with specific requirements for placement and format in text, audio, images, videos, virtual scenes.
        • Implicit Labeling: Refers to technical measures adding identifiers within the content file data (metadata), not easily perceived by users, containing information like synthesis attribute, service provider, content ID. Providers required to add implicit labels per Deep Synthesis Provisions Article 16.
      • Clarifies Platform Responsibilities: Platforms providing content dissemination services need to:
        • Verify if user-uploaded content metadata contains implicit labels.
        • Make explicit labels prominent if detected in user uploads.
        • If no implicit label can be verified and the user has not declared the content as synthetic, but the platform nonetheless detects explicit labels or other traces of synthesis, it should mark the content as suspected synthetic and add a prominent notice label nearby.
      • User Responsibility: Users publishing synthetic content via platforms should actively declare it and use the provider’s labeling function.
      • Prohibits Malicious Tampering: Any organization or individual must not maliciously delete, tamper with, forge, or conceal the required synthetic content labels.
      • Links to Filing/Assessment: Providers should submit labeling-related materials during algorithm filing, security assessment processes.
    • Significance: Aims to help the public discern disinformation through standardized labeling, enabling full-chain governance from generation to dissemination, ensuring accountability, and providing technical support for traceability. Related technical standards like “Cybersecurity Technology - Identification Methods for AI-Generated Synthetic Content” are under development.
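
The explicit/implicit label mechanics above can be made concrete with a small sketch. This Python fragment is purely illustrative: the metadata field names (`ai_generated`, `synthetic`, etc.) are assumptions for this example rather than the standard's actual schema (the supporting national standard is still under development), and real implementations would embed implicit labels in format-specific metadata (image EXIF/XMP fields, video container atoms) rather than a plain dictionary.

```python
# Illustrative sketch of implicit labeling and a platform-side check.
# Field names are hypothetical; the real schema is set by the pending standard.

def add_implicit_label(metadata: dict, provider: str, content_id: str) -> dict:
    """Provider side: embed a machine-readable synthesis label in metadata."""
    labeled = dict(metadata)
    labeled["ai_generated"] = {   # hypothetical field name
        "synthetic": True,
        "provider": provider,
        "content_id": content_id,
    }
    return labeled

def platform_check(metadata: dict) -> str:
    """Platform side: classify an upload per the duties the Measures describe."""
    label = metadata.get("ai_generated")
    if isinstance(label, dict) and label.get("synthetic"):
        return "labeled-synthetic"   # surface a prominent explicit label to users
    # No verified implicit label: content may still be flagged as suspected
    # synthetic if the platform detects explicit labels or traces of synthesis.
    return "unverified"
```

The content ID carried in the implicit label is what enables the traceability and accountability goals noted above: a platform or regulator can, in principle, trace a piece of synthetic content back to the service that generated it.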

3. Core Characteristics and Development Trends of China’s AI Regulatory Framework

  • Agile Response & Precision Strikes: Regulators react quickly to specific risks emerging from technological development (e.g., filter bubbles/discrimination from recommendations, authenticity issues with deepfakes, hallucinations/content risks with generative AI, and now the need for unified content labeling via the new Measures), issuing targeted specific regulations, demonstrating regulatory agility.
  • Security First, Multiple Goals Balanced: Maintaining national security, social stability, and ideological security are given paramount importance in regulatory goals. Simultaneously, aims to protect lawful rights of individuals/organizations and promote technological innovation and healthy industry development.
  • Emphasis on Platform Responsibility & Source Governance: Regulatory focus is largely on platform providers and developers offering AI services/tech, requiring them to assume multiple responsibilities for content management, data security, algorithm transparency, user protection. Mechanisms like algorithm filing, large model filing, and security assessment attempt source governance for impactful AI services.
  • Close Linkage with Foundational Legal Framework: All specific regulations explicitly require compliance with foundational laws like CSL, DSL, PIPL, forming a legal application system of “Basic Laws + Specific Regulations.”
  • Increasing Importance of National Standards & Technical Guidance: As regulation deepens, national standards (GB/T) and technical guidelines (like “Basic Security Requirements for Generative AI,” various draft standards on data training, labeling, content identification) play a growing role in detailing requirements, providing operational guidance, and supporting assessments/filings.
  • Emphasis on Balancing Development & Regulation: Policies repeatedly stress “seeking development through regulation, promoting regulation through development,” reflecting an intent to find equilibrium between encouraging innovation and mitigating potential risks (e.g., the “inclusive prudence” and “classified, hierarchical supervision” principles in the Interim Measures).
  • Continuous Evolution & Dynamic Improvement: China’s AI regulatory system is far from final and remains in a state of rapid development and constant refinement. As technology applications deepen and new issues emerge, it is highly likely that more comprehensive, detailed, or even unified laws and regulations will be introduced in the future. An “Artificial Intelligence Law” draft is listed in the State Council’s legislative work plan as a reserve item, indicating a long-term goal towards comprehensive legislation, but departmental rules and standards currently dominate.

4. Key Implications and Compliance Points for the Legal Industry Operating in China

  • Ensure Own Operations are Compliant:
    • If law firms or corporate legal departments develop or deploy any public-facing generative AI services (e.g., website chatbots, internal compliance Q&A systems), they must strictly comply with the Interim Measures for Generative AI Services, fulfill security assessment and filing (large model and algorithm) obligations (if applicable), ensure training data is lawful, manage generated content, and adhere to the labeling obligations under the latest AI-Generated Synthetic Content Identification Measures.
    • If deep synthesis technology is used in marketing, internal training, or client communication (e.g., virtual lawyer avatars, AI-synthesized voice notifications), it must be conspicuously labeled according to the Deep Synthesis Provisions and the Identification Measures.
    • In all AI applications involving processing personal information (client, employee, opposing party data), strict compliance with all PIPL provisions is mandatory, especially regarding notice-consent, sensitive information handling, data security safeguards, and cross-border transfers.
  • Advising Clients on AI Compliance is a New Frontier:
    • As China’s AI regulatory system becomes more elaborate and complex (especially with the new filing, assessment, and labeling requirements), advising clients (especially tech companies, internet platforms, content creators, and enterprises using AI across industries) on compliance regarding algorithm recommendation, deep synthesis, and generative AI services will be a significant and growing legal service demand.
    • Lawyers need to help clients understand and comply with these complex requirements, assisting with algorithm/large model filings, security assessments (referencing standards like “Basic Security Requirements”), implementing content labeling (per the Identification Measures), developing internal policies, drafting user agreements/privacy policies, responding to regulatory inquiries, etc.
    • In transactions involving AI technology licensing, joint development, data trading, or M&A deals with AI components, lawyers must treat AI-related legal compliance (data, algorithm, IP compliance, filing/assessment status, labeling compliance) as a core aspect of due diligence and contract negotiation.
  • Potential Impact on Evidence Rules Requires Attention:
    • The labeling requirements for AIGC content under the Deep Synthesis Provisions and especially the Identification Measures provide clearer external clues and reference points for courts when assessing the authenticity of related digital evidence in the future. Clear labels help distinguish AI-generated content from original information, but the prohibition on malicious tampering must also be noted. Labeling itself doesn’t conclusively prove or disprove authenticity; judgment still requires technical forensics and consideration of all evidence.
  • Maintain High Sensitivity to Legislative & Standards Dynamics: China’s AI legal, regulatory, and standards environment is changing very rapidly. Legal professionals must establish effective channels (follow official releases, subscribe to professional news, attend industry seminars) to continuously monitor and learn about the latest legislative progress (like AI Law developments), new regulations (like implementation rules for Identification Measures), enforcement cases, and standards development (security requirements, data specs, identification methods) to provide the most timely and accurate compliance advice to themselves and clients.

Conclusion: Advancing Within Norms, Refining Through Development


Through a combination of foundational laws, specific regulations targeting particular AI applications, and increasingly detailed national standards and technical guidelines, China has preliminarily constructed an AI regulatory framework with its own characteristics, emphasizing both security and development, and currently in dynamic evolution. The latest AI-Generated Synthetic Content Identification Measures further refine the governance chain and strengthen transparency requirements. This framework imposes clear and growing compliance obligations on all entities (including legal service providers and their clients) that develop, provide, or use AI technologies and services within China.

For legal professionals, deeply understanding and strictly adhering to this regulatory system is not only a baseline requirement for their own compliant operation but also a core capability for providing high-quality, forward-looking legal services to clients. While leveraging AI technology to enhance efficiency and innovate services, compliance awareness must be integrated into every aspect of AI application, ensuring technological progress always advances steadily within the bounds of the rule of law. The next chapter will delve into the challenging intersection of AI and intellectual property, a globally pertinent legal issue.