
7.6 Legal Liability Issues for AI Products/Services

When Intelligence “Errs”: The Dilemma of Determining Legal Liability for AI Products and Services

Artificial intelligence (AI) systems are no longer confined to backend algorithms or cutting-edge lab concepts. They increasingly appear as tangible, perceptible products or services deeply integrated into our lives and work, playing ever more important roles in critical societal domains: cars with advanced driver-assistance or partial autonomy navigating crowded city streets; intelligent diagnostic systems assisting doctors in interpreting complex medical images; financial “robo-advisors” offering personalized investment portfolio suggestions; algorithms automatically reviewing and filtering vast amounts of online content; and, inside and outside the courtroom, tools that assist judges with intelligent case retrieval and lawyers with legal research and preliminary decision analysis.

However, an unavoidable reality remains: technology is not perfect, and intelligence can also “err.” These systems, lauded as “intelligent” and expected to surpass human limitations, may be embodied as tangible hardware products (autonomous vehicles, medical devices, smart robots) or delivered as intangible software services (AI analytics platforms, online recommendation systems, cloud APIs). When they malfunction, make serious misjudgments, fall short of their claimed performance, or directly or indirectly cause various forms of harm in operation (personal injury or death, property damage, infringement of reputation or privacy, significant economic losses, or harm to other legal rights and interests), an extremely thorny core question immediately surfaces, one that current legal frameworks answer with considerable uncertainty:

“Who should bear legal responsibility for the harm caused by this ‘intelligent’ system?”

The chain of responsibility in the AI era can become exceptionally long and complex:

  • Is it the AI researcher or engineering team who designed the core algorithm, because their model inherently contained flaws or unconsidered risks?
  • Is it the developer team who implemented the algorithm into specific software code, due to negligence in programming, testing, or security hardening?
  • Is it the institution or individual who provided the massive training data, because the data itself was biased, erroneous, or infringed upon others’ rights?
  • Is it the operator or manufacturer who deployed the AI model into a specific application scenario and provided the product or service to end-users, because they failed to adequately assess risks, perform necessary localization adaptation, or fulfill duties of safety assurance and ongoing monitoring?
  • Or is it the end-user (individual or employee within an organization) who operated or relied on the AI tool in a specific context, due to improper operation, ignoring warnings, or placing unreasonable over-reliance on its capabilities?

Our existing legal liability frameworks—the tort liability system, the product liability system, and contract law (e.g., centered on the Civil Code in jurisdictions like China)—were established primarily to address harm caused by human actions (negligent or intentional) and by traditional technological products (usually physical, with relatively well-defined functions). When we attempt to apply them directly to this entirely new technological form, which is extremely complex, often opaque in its internal decision-making, and whose behavior may even dynamically evolve through learning, we inevitably encounter a series of unprecedented theoretical interpretation dilemmas and practical operational challenges.

1. The “Shortcomings” of Existing Legal Liability Frameworks When Facing AI Challenges

Applying the liability rules of existing legal frameworks (like those based on the Civil Code), whether fault-based tort liability, product liability, or contract-based liability, to harm caused by AI systems (especially complex, partly autonomous ones) often runs into several core obstacles that can stall legal application or lead to unfair outcomes:

  • The “Black Box” Problem & Extreme Difficulty in Proving Causation:

    • Core Challenge: Many cutting-edge AI systems, particularly those built on deep learning neural networks (e.g., Transformer architectures for LLMs, CNNs for image recognition, deep reinforcement learning for complex decision-making), possess billions or even trillions of parameters. Their decision logic and operational mechanisms are extremely complex, highly non-linear, and largely opaque even to their developers, let alone external observers—an impenetrable “black box” (principles and challenges discussed in Sec 2.8, 6.3, 6.4).
    • Disruptive Impact on Proving Causation: In traditional tort or product liability litigation, the plaintiff (victim) usually bears the burden of proving a direct, legally cognizable causal link between the harm suffered and the defendant’s specific conduct (e.g., a negligent act) or the product’s specific defect. When harm is caused by a “black box” AI system, however, it is extremely difficult, often technically impossible, for the victim to prove with evidence that the harm was directly caused by a specific, identifiable defect or error within the AI system (was it the core algorithm design? biased or flawed training data? a failure to generalize correctly to unforeseen input? hardware or network interference?). We may know the AI “made a mistake” causing harm, but pinpointing precisely where and why the error occurred, in a way that convinces a court, is often infeasible (the short sketch at the end of this list illustrates this opacity with a toy model).
    • Consequence in Legal Practice: This almost insurmountable burden of proof regarding causation could mean that even when harm objectively exists and is closely related to the AI system’s operation, the victim might be unable to obtain due compensation under existing tort or product liability rules because they cannot meet the law’s stringent causation requirements. This clearly contradicts the fundamental legal goals of fairness, justice, and effective remedy for victims.
  • Ambiguity and Dynamism of the “Duty of Care” Standard for Relevant Actors:

    • Core Challenge: In tort law systems (like those based on the Civil Code), the fault liability principle (Negligence Liability) is often the primary basis for determining liability. A core element of fault is whether the actor (potential defendant) breached the legally recognized “Duty of Care”—the reasonable level of caution required to avoid causing harm to others. However, for AI, a novel and rapidly evolving technology, defining the appropriate standard of “reasonable care” for the various actors involved throughout its lifecycle—from algorithm designers and software developers, to data providers and annotators, to operators deploying AI systems into specific contexts, and even end-users interacting with AI tools—is a major new question lacking clear legal statutes or mature case law guidance.
      • What is the Developer’s Duty? Should their standard be that of a “reasonably prudent AI developer” at the time, or the “highest best practice standard of leading, responsible developers”? To what extent must they foresee potential misuse, abuse, or unintended negative consequences of their AI system in real-world applications? What specific level of diligence in ensuring training data quality and compliance, conducting thorough testing and validation, designing necessary safety features, assessing and mitigating bias, and providing clear instructions and warnings constitutes fulfillment of their duty?
      • How Broad is the Deployer/Operator’s Duty? For organizations introducing third-party AI systems (or applications built on third-party models) into their business processes or offering them to end-users (e.g., companies using AI hiring tools, platforms offering robo-advice, hospitals using AI diagnostic aids), what standard of care applies to their selection of the AI system (sufficient tech assessment/risk due diligence?), configuration and deployment (reasonable parameters? secure integration?), ongoing operational monitoring (detecting performance degradation/anomalies?), maintenance and updates (timely security patches/model updates?), and establishment of effective human oversight, intervention, and emergency response mechanisms?
      • Where is the Balance for User’s Reasonable Reliance & Duty? For end-users (individuals or employees), what is the boundary of “reasonable reliance” on AI outputs? Under what circumstances does a user’s uncritical acceptance of AI-provided information or suggestions, without any verification or ignoring explicit limitations, constitute a breach of their own duty of care, potentially leading to shared or sole responsibility (e.g., contributory negligence)?
      • Challenge of Dynamism from Rapid Tech Evolution: AI technology itself is evolving extremely rapidly, along with its capabilities, risks, and best practices. This means the standard of “reasonable care” cannot be static but needs to dynamically adjust with technological progress, deepening industry understanding, and emerging risk awareness, posing challenges to legal certainty and predictability.
    • Consequence in Legal Practice: The lack of clear, uniform, and operational standards for the duty of care for various AI actors makes it very difficult and contentious to determine subjective fault in specific AI-related tort cases, significantly increasing judicial difficulty and outcome uncertainty.
  • Difficulties in Applying Product Liability Law (e.g., based on Product Quality Law / Civil Code):

    • Core Challenge: Product liability laws (like those found in Civil Codes or specific Product Quality/Liability Acts) typically establish liability (often strict liability, not requiring proof of fault, for manufacturers/sellers) for harm caused by “defective” products. However, directly applying this framework, primarily designed for tangible, physical products with relatively defined functions, to AI systems (especially those existing as pure software, online services (SaaS), or algorithmic modules within complex systems) encounters several fundamental questions and applicability hurdles:
      • Is AI a “Product” or a “Service”? The Characterization Dilemma: This is the threshold question. Should an AI system, especially one delivered via cloud APIs or SaaS, be legally characterized as a “Product,” thus subject to product liability rules (esp. strict liability, broader range of potential defendants)? Or should it be treated as a “Service,” primarily governed by contract law (breach of service agreement) or general fault-based tort rules? This characterization is critical as it impacts the nature of liability (fault/strict), scope of liable parties (manufacturer/seller focus), and the plaintiff’s burden of proof. Current laws often lack clear, uniform definitions for AI in this context, leading to potential inconsistency in judicial practice. AI software embedded within hardware (autonomous vehicles, medical devices) is more likely seen as part of a “product,” but the status of standalone AI software or online services is more debatable.
      • How to Define “Defect” for AI Systems? Even assuming AI is a “product,” what constitutes a “defect” in the product liability sense? Traditional law often identifies three types:
        • Design Defect: Unreasonable risk inherent in the product’s design. For AI, could an algorithm’s design logic itself, if containing foreseeable biases causing severe discrimination, lacking necessary safety redundancies, or failing essential explainability for critical decisions, be considered a “design defect”?
        • Manufacturing Defect: An individual product deviates from its intended design due to flaws in production. Is there a direct analogy for software/algorithms? If a trained AI model, due to its internal parameters/structure, exhibits unreliable or “hallucinatory” behavior on specific, unforeseen inputs (e.g., an LLM generating fabricated facts on certain prompts; an image recognizer consistently misclassifying a rare but important object), could this idiosyncratic, unexpected failure be analogized to a “manufacturing defect”? (Though fundamentally different from physical flaws).
        • Warning Defect / Failure to Warn: Failure to provide adequate instructions or warnings about proper use and potential risks. For AI systems (esp. those with ambiguous capabilities, unpredictable behavior, potential for misuse, like LLMs or ADAS), did the developer/provider adequately, clearly, and effectively disclose the system’s true limitations, known risks (hallucinations, bias), preconditions for safe use, and proper operating methods? Defining “adequate and effective” warning for complex AI is itself challenging.
    • Consequence in Legal Practice: Whether and how AI systems fit within the product liability framework directly affects the nature of liability (fault/strict), scope of defendants, plaintiff’s burden of proof, and potential remedies. This has major implications for victims seeking effective redress. Future legislation or judicial clarification is likely needed.
  • Diffused Responsibility Among Multiple Actors & Difficulty in Assigning Liability:

    • Core Challenge: Developing, training, deploying, and applying a complex AI system often involves a long, intricate chain of collaboration among multiple distinct actors and organizations. This chain might include:
      • Original data providers (public sources, commercial vendors, internal business data).
      • Teams/vendors performing data cleaning, labeling, preprocessing.
      • Core algorithm designers/researchers (academia or corporate labs).
      • Software engineers and data scientists training, tuning, testing models.
      • Cloud platform providers offering computing power and deployment environments.
      • Application developers or system integrators incorporating AI models into specific software/hardware products.
      • Organizations purchasing, deploying, configuring, and using the AI system in actual business operations.
      • Individual end-users operating the AI tool.
      When harm occurs, the root cause might not be a single factor but stem from issues at one or multiple points along this chain (biased data, flawed algorithm, buggy code, improper deployment, user error), or result from the interaction of multiple factors.
    • Consequence in Legal Practice: In such situations, clearly and accurately attributing legal responsibility solely to one specific actor, or fairly and reasonably allocating liability among multiple potentially responsible parties, becomes extremely difficult. This “diffusion” or “fragmentation” of responsibility across many actors can lead to a “liability vacuum” (each party blaming the others) or mutual finger-pointing, making the victim’s effort to identify a clearly liable party and obtain effective compensation exceptionally tortuous, lengthy, and uncertain.
  • Challenge to “Foreseeability” from AI’s (Limited) Autonomy & Emergent Behavior:

    • Core Challenge: Certain AI systems, especially those built using Reinforcement Learning or employing Continual/Lifelong Learning paradigms (e.g., advanced algorithmic trading strategies, adaptive recommendation systems, future robots optimizing behavior based on environmental feedback), are designed with some capacity for autonomous learning, adaptation to changing environments, and self-optimization of behavior. This means their specific behavior patterns and decision logic might dynamically evolve over time through interaction, potentially deviating from or exceeding what was directly anticipated or fully controlled by their original designers or deployers, exhibiting a degree of “autonomy” and potentially unpredictable “Emergent Behavior.”
    • Impact on Traditional Tort Theory: This limited “autonomy” and potential “behavioral evolution” pose a profound challenge to the core concept of “Foreseeability” relied upon in traditional tort law (especially in negligence determination and causation analysis). Traditionally, actors are typically liable only for harms they could reasonably foresee. If an AI system’s “autonomous” action (e.g., an autonomous vehicle’s unusual reaction in an extreme unforeseen scenario) was genuinely unforeseeable by the developer during design/testing (because it emerged from the system’s interaction with the environment), should the developer still be held liable (for negligence)? Does this require adjusting or abandoning the foreseeability requirement in certain AI tort scenarios? This challenges the fundamental principle of assigning liability based on the actor’s capacity to anticipate the consequences of their actions.
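
To make the “black box” point above concrete, here is a minimal, purely illustrative sketch (assuming Python with numpy and scikit-learn installed; the data, the hidden labeling rule, and the model size are all invented). Even for a toy network, the only “explanation” available by default is a set of weight matrices: nothing in them maps to a human-readable reason for any individual prediction, which is exactly why tying a specific harm to a specific, identifiable defect is so hard for far larger production systems.

```python
# Minimal, purely illustrative sketch: even a small trained model's internals
# are opaque matrices of numbers, not human-readable decision rules.
# Assumes numpy and scikit-learn are installed; the data is synthetic.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 20))                     # 1,000 synthetic "cases", 20 features each
y = (X[:, 0] * X[:, 3] - X[:, 7] > 0).astype(int)   # hidden nonlinear labeling rule

model = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=500, random_state=0)
model.fit(X, y)

# The only "explanation" available by default is raw weight matrices.
n_params = sum(w.size for w in model.coefs_) + sum(b.size for b in model.intercepts_)
print(f"Trained parameters: {n_params}")            # thousands, even for this toy model
print(model.coefs_[0][:2, :5])                      # numbers with no legal or human meaning

# A single prediction arrives with no stated reason:
print(model.predict(X[:1]))                         # e.g. [1] -- but *why*?
```

Post-hoc explanation tools (feature attribution, surrogate models) can approximate which inputs influenced an output, but they yield statistical approximations rather than the kind of robust causal account that courts typically require from a plaintiff.
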
2. Analysis of Potential Liable Parties and Their Possible Liability Bases (From a General Legal Perspective)

When an AI system (product or service) causes harm, the following parties could potentially be held legally responsible under legal principles common to many jurisdictions (tort law, product liability, contract law, often reflected in civil codes). The analysis of each party’s liability basis depends on the specific facts, the available evidence, and the applicable attribution principles:

  • AI Algorithm Designers / Software Developers:

    • Possible Liability Bases:
      • Negligence Tort Liability: If proven they breached their professional duty of care in algorithm design (e.g., failing to adequately consider safety, fairness, risks), model selection, software implementation (e.g., serious bugs), risk assessment, or testing/validation (insufficient testing, failure to cover key scenarios), and this negligence directly caused the harm. (Difficulty lies in proving negligence and causation, especially with black boxes.)
      • Product Liability (as part of Manufacturer): If the AI algorithm/software is integrated into a final “product” (e.g., smart medical device, autonomous vehicle) and is deemed “defective” (e.g., poses an unreasonable danger), they might share joint and several liability with the final product manufacturer under product liability rules (which often apply strict liability for harm caused by defects).
  • Training Data Providers / Annotators:

    • Possible Liability Bases:
      • Liability for Providing Infringing Data: If the training data itself infringed third-party rights (illegal source, contained unauthorized private info or copyrighted content), leading to infringing AI outputs/decisions, the data provider/annotator might be liable for the initial infringement.
      • Liability for Data Quality Issues (Contract or Tort): If they failed to ensure the accuracy, completeness, compliance, or lack of bias of provided data/labels as required by contract or law, and these data quality issues are proven to be a key direct cause of the AI model’s defect and resulting harm, they could theoretically be liable. Liability might first arise under their contract with the data recipient (e.g., breach of warranty), but tort liability to the ultimate victim cannot be entirely excluded if negligence directly caused harm.
    • Main Practical Obstacles: Proving a clear, direct causal link between a specific issue in the training data and downstream harm caused by an AI application is usually extremely difficult technically and legally. Liability allocation is often heavily influenced by prior contractual agreements regarding data quality, liability limits, and IP rights.
  • AI Model Trainers / Owners:

    • Possible Liability Bases: Entities responsible for the actual AI model training process, data selection/configuration, setting training objectives, and ultimately owning/controlling the trained model (could be the developers themselves or companies training models in-house) bear direct responsibility for quality control of the training process, compliance review of data, validation of the final model, risk assessment, and ongoing maintenance/updates. If harm results from negligence during training (e.g., using flawed data, insufficient validation) or if the delivered/deployed model itself is deemed “defective” (if part of a product), they could face tort or product liability.
  • AI System Deployers / Operators / Service Providers:

    • Possible Liability Bases: These are the key entities putting AI systems into practical use and offering products/services to end-users (e.g., company using AI for hiring; platform offering robo-advice; hospital using AI diagnostic aids; law firm/tech company providing online AI legal consultation). Their potential liability bases are diverse and often most direct:
      • Direct Negligence Liability: If they failed to exercise reasonable care in any of the following, causing harm:
        • AI System Selection & Onboarding: Insufficient technical assessment, risk due diligence, compliance checks when choosing third-party AI systems.
        • System Configuration, Deployment & Integration: Unreasonable parameter settings for the context; inadequate security measures (technical/organizational); unsafe/unreliable integration with existing systems.
        • Operational Monitoring & Maintenance: Failure to effectively monitor system status, performance, risks; failure to promptly address anomalies, errors, performance decay; failure to apply necessary security patches or model updates.
        • Lack of Effective Human Oversight & Intervention: Failure to establish clear operating procedures, provide adequate training/risk warnings to users, or implement necessary human review, intervention, and final decision-making steps, leading to over-reliance on AI or uncorrected errors.
      • Liability as “Product” Seller / Service Provider:
        • Product Liability (Seller): If the AI system (or hardware containing it) is deemed a “product” and is defective, the seller might bear secondary liability alongside the manufacturer (rules vary by jurisdiction, often allowing seller recourse against manufacturer unless seller was also negligent).
        • Liability for Breach of Safety Duty (Service Provider): If AI is provided as an online service (SaaS), the provider might, under certain laws (like China’s Civil Code Art. 1198 for managers of public venues), owe a duty of care to ensure safety. Failure to meet this duty (e.g., serious platform security flaws, inadequate management of user-generated risks) causing harm could lead to tort liability.
        • Liability as Network Service Provider: If the AI platform allows user content generation/publication, rules regarding liability for third-party infringement (e.g., “notice-and-takedown”) might apply.
      • Contractual Liability: If a service contract exists with the end-user, and the AI system’s failure or error leads to a breach of contract (failure to provide service as agreed, causing loss), the user can claim remedies for breach (specific performance, damages, etc.).
    • Key Consideration: Evaluating deployer/operator liability hinges on whether they exercised due diligence appropriate to the application context and risk level in selection, safe deployment, effective monitoring, timely maintenance, adequate disclosure, and establishing reasonable oversight (a minimal sketch of one such oversight-and-logging step appears after this list).
  • End Users of AI Systems:

    • Possible Liability Basis: End users (individuals or employees) are not immune from responsibility. If a user:
      • Clearly misuses or abuses the AI system (e.g., malicious prompts, illegal purposes).
      • Ignores explicit warnings about capability limits and risks provided by the vendor/organization.
      • Places unreasonable, blind over-reliance on AI outputs, abdicating their own responsibility for critical thinking, verification, and professional judgment based on common sense, experience, or expertise.
      • Fails to follow operating procedures or best practices (e.g., skipping required validation steps).
      If such conduct directly causes or contributes to harm to the user or to third parties, the user may bear corresponding fault-based liability. Where fault is shared, the user’s negligence may reduce the damages recoverable from others (contributory negligence) or, in some cases, amount to primary or independent tort liability.
    • Typical Examples (Reiterated): Driver disengaging from driving while using L2/L3 ADAS; doctor blindly following an AI diagnostic suggestion conflicting with their experience without further checks; lawyer directly copy-pasting unverified LLM output containing fabricated cases or wrong statutes into legal documents.
  • Can the AI System Itself Be Held Liable? (Further discussed in Section 8.3 on AI Legal Personhood)

    • Under Current Legal Frameworks: No. As previously stated, under prevailing laws globally, AI systems themselves, no matter how “intelligent” or “autonomous” they appear, lack independent legal personality. They cannot legally hold rights, owe duties, or directly bear legal liability the way natural persons or legal entities can.
    • Ultimate Attribution of Liability: Therefore, regardless of how “autonomously” an AI system’s actions cause harm, the resulting legal responsibility must ultimately be attributed, through legal rules, to the relevant natural persons or legal entities behind it (designers, trainers, deployers/operators, users, etc.). The law’s task is to determine how liability should be fairly allocated among these human or organizational actors in different situations.
    • Future Possibilities: Whether and how to grant some form of legal status or limited personhood to potentially highly advanced future AI (AGI/ASI) is a complex philosophical and legal debate for the distant future, largely separate from the practical need to resolve AI liability issues now.
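
As a purely illustrative sketch of how the deployer and operator duties discussed above (human oversight, operational monitoring, record-keeping) can be turned into concrete engineering steps, the hypothetical Python wrapper below refuses to act on low-confidence AI outputs without human sign-off and appends an audit record for every decision. All names, the 0.9 confidence threshold, and the file-based log are invented assumptions for illustration, not requirements drawn from any statute or standard.

```python
# Hypothetical sketch of a human-oversight gate around an AI decision, with audit
# logging. Names, thresholds, and structure are illustrative assumptions only.
import json
import time
import uuid
from dataclasses import dataclass, asdict
from typing import Callable, Optional

@dataclass
class AuditRecord:
    decision_id: str
    timestamp: float
    model_version: str
    model_input: dict
    model_output: dict
    confidence: float
    human_reviewer: Optional[str]   # None means the output was auto-approved
    final_decision: dict

def decide_with_oversight(
    model_fn: Callable[[dict], tuple],      # returns (output_dict, confidence)
    request: dict,
    model_version: str,
    human_review_fn: Callable[[dict, dict], tuple],  # returns (decision_dict, reviewer_name)
    confidence_threshold: float = 0.9,
    audit_log_path: str = "ai_decision_audit.jsonl",
) -> dict:
    output, confidence = model_fn(request)

    if confidence >= confidence_threshold:
        final, reviewer = output, None                      # auto-approved, still logged
    else:
        final, reviewer = human_review_fn(request, output)  # mandatory human sign-off

    record = AuditRecord(
        decision_id=str(uuid.uuid4()),
        timestamp=time.time(),
        model_version=model_version,
        model_input=request,
        model_output=output,
        confidence=confidence,
        human_reviewer=reviewer,
        final_decision=final,
    )
    with open(audit_log_path, "a", encoding="utf-8") as f:
        f.write(json.dumps(asdict(record)) + "\n")          # append-only evidence trail
    return final

if __name__ == "__main__":
    # Tiny demo with stub functions standing in for a real model and a real reviewer.
    fake_model = lambda req: ({"recommendation": "reject"}, 0.62)
    fake_reviewer = lambda req, out: ({"recommendation": "interview"}, "j.smith")
    print(decide_with_oversight(fake_model, {"applicant_id": 123}, "v2.3.1", fake_reviewer))
```

Whether such a gate satisfies a given duty of care remains a legal and contextual question; the point is only that oversight and documentation can be operationalized rather than left as abstract obligations.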

3. Exploring the Future: Potential Evolution of Liability Rules and Institutional Innovations


Facing the profound challenges and inadequacies AI poses to existing legal liability systems, legislators, judiciaries, academics, and industry organizations worldwide are actively considering and exploring ways to adjust, supplement, or even reconstruct relevant legal rules and institutional arrangements. The goal is to handle novel harm scenarios caused by AI more fairly, effectively, and predictably, striking a better balance among multiple objectives: protecting victims’ rights, allocating social risks reasonably, incentivizing responsible innovation, and clarifying behavioral expectations for all actors. Key directions and potential innovations include:

  • Adjusting or Clarifying the Scope and Rules of Product Liability Law for AI:

    • Legislative Clarification of AI as “Product”: Consider specific legislation or amendments to existing product liability laws to more clearly define which types of AI software, algorithms, or AI-driven services (e.g., those with standalone functions, standard delivery, potential safety risks) can be treated as “products” under the law, thus enabling the direct application of product liability rules (especially strict liability for manufacturers/sellers). This could lower the burden of proof for victims seeking redress from developers/providers.
    • Refining “Defect” Standards for AI: Develop more specific, operational criteria for determining if an AI system suffers from a design defect, manufacturing defect (or analogy), or warning defect. E.g.:
      • Design Defect: Could “foreseeable algorithmic bias causing severe unfair discrimination,” “lack of necessary safety redundancies or fail-safe mechanisms,” or “lack of essential explainability for critical decisions” be considered design defects?
      • Analogy for “Manufacturing” Defect: Could an AI model’s unreliable or “hallucinatory” behavior on specific inputs (if meeting certain conditions) be analogized to a manufacturing defect?
      • Warning Defect: How to define the adequate information AI developers/providers must disclose about capability limits, potential risks, data sources, algorithmic logic to fulfill their duty to warn? Should standards differ based on AI risk level/application?
    • Considering Special Strict Liability Regimes or Compensation Funds: For certain very high-risk AI applications with significant potential societal benefits (e.g., fully autonomous vehicles, AI controlling critical infrastructure, high-risk medical diagnostic AI), consider adopting approaches from areas like nuclear accidents or vaccine injuries: special legislation imposing stricter liability (e.g., higher liability caps or unlimited strict liability for operators/owners), or establishing mandatory, industry-funded no-fault compensation funds. This ensures victims receive timely and adequate compensation even if fault is hard to pinpoint or liable parties lack resources.
  • Clarifying and Refining Duty of Care Standards for AI Actors:

    • Need to gradually clarify and detail the specific content, requirements, and standards of the reasonable duty of care owed by AI system developers, deployers, operators, and end-users in different contexts. This can be achieved through accumulation of authoritative case law, supreme court guidance, or the development and promotion of relevant best practice guidelines, technical standards, or ethical codes by industry associations, standards bodies (like NIST in the US, TC260 in China), etc. Referencing authoritative documents like the NIST AI Risk Management Framework (AI RMF) can help translate core principles into operational duties. Clearer duty of care standards provide guidance for industry behavior and a more objective basis for judicial fault determination.
  • Exploring Adjustments to Burden of Proof Rules in Specific AI Tort Scenarios:

    • To effectively address the extreme difficulty victims face in proving causation and fault in AI tort cases (esp. due to “black box” issues and information asymmetry), explore adjusting traditional burden of proof rules under specific conditions (e.g., high-risk AI causing typical harm, plaintiff provides prima facie evidence linking harm to AI operation). Possible adjustments include:
      • Lowering the plaintiff’s standard of proof for causation.
      • Introducing presumptions of fact or causation: If plaintiff proves certain basic facts (e.g., AI system had known defect, harm typical of that defect occurred), presume fault or causation, shifting the burden of rebuttal to the defendant.
      • Strengthening defendant’s disclosure and explanation obligations: Granting plaintiffs broader rights in litigation to demand disclosure of relevant information about algorithm design, training data, test records, logs (balanced with trade secret protection), and requiring defendants to provide greater explainability for system decisions.
    • International Legislative Exploration: The EU’s proposed AI Liability Directive is a major legislative exploration in this direction. It proposes rules aimed at lowering the threshold for AI harm victims’ claims, including presumptions of causality under certain conditions and rights for claimants to request disclosure of evidence related to high-risk AI systems. If enacted, such exploratory legislation could have significant demonstrative effects globally.
  • Developing and Improving Mandatory Insurance or Industry Compensation Fund Mechanisms:

    • For AI applications with significant societal benefits but inherent residual risks (e.g., widespread autonomous driving, AI in inclusive finance or public health), consider models like mandatory third-party liability insurance (similar to auto insurance) or industry-funded compensation funds (like nuclear liability pools, vaccine injury funds). Requiring key industry players (developers, platforms, manufacturers) to contribute could establish mechanisms ensuring victims receive timely, adequate compensation regardless of fault attribution difficulties or liable party solvency. This helps socialize risks and provides a safety net.
  • Strengthening Pre-Market Regulation, Certification, and Standard Setting for AI Products & Services:

    • Shift some governance focus from post-hoc liability to ex-ante risk prevention. By strengthening market access regulation for AI products/services (e.g., the EU AI Act’s strict conformity assessment and CE marking for high-risk AI; China’s data export security assessments and algorithm filings), and promoting the development and application of relevant national/international standards and certification mechanisms (e.g., standards for AI model safety, reliability, fairness, explainability, privacy protection), a higher bar for safety, reliability, and compliance can be set before AI products/services enter the market. This helps prevent incidents leading to liability in the first place. Effective pre-market regulation is key to reducing downstream liability risks and social costs.
  • Clarifying and Detailing Legal Liabilities of Data-Related Parties:

    • AI performance and behavior are highly dependent on the data used. Therefore, clearer legislation, judicial interpretation, and robust contractual provisions are needed to further define the specific legal responsibilities and obligations of different actors in the data value chain—data providers, processors, annotators, etc.—regarding ensuring data source legality, content accuracy, quality reliability, and compliance with data security/privacy rules. This helps allocate responsibility more accurately when harm results from data issues (a rough sketch of the kind of provenance record that would support such allocation appears after this list).
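
As a rough illustration of the record-keeping that the disclosure obligations and data-chain responsibilities discussed above would rely on, the sketch below defines a hypothetical “provenance manifest” capturing who supplied the training data, who trained and deployed the model, and which evaluations were run. Every field name, identifier, and value is invented for illustration; real disclosure and documentation requirements would be set by legislation, regulators, contracts, or standards such as those referenced earlier.

```python
# Hypothetical provenance manifest for an AI system, sketching the kind of
# structured record that could support later evidence disclosure or liability
# allocation. Field names, values, and structure are illustrative assumptions only.
import json
from dataclasses import dataclass, field, asdict

@dataclass
class DatasetRecord:
    name: str
    supplier: str            # who provided the data
    license: str             # legal basis / license for use
    collected_through: str   # e.g. "commercial vendor", "public scrape", "internal export"
    known_limitations: list = field(default_factory=list)

@dataclass
class ProvenanceManifest:
    system_name: str
    model_version: str
    developer: str           # who designed/implemented the model
    trainer: str             # who ran training and validation
    deployer: str            # who put it into the specific application context
    datasets: list = field(default_factory=list)
    evaluations: list = field(default_factory=list)  # test reports, bias audits, etc.

manifest = ProvenanceManifest(
    system_name="resume-screening-assistant",        # invented example
    model_version="2.3.1",
    developer="ExampleAI Labs",
    trainer="ExampleAI Labs",
    deployer="Acme HR Department",
    datasets=[DatasetRecord(
        name="historical-hiring-2015-2022",
        supplier="Acme HR Department",
        license="internal data, DPIA #2023-17",      # invented reference
        collected_through="internal HR system export",
        known_limitations=["under-represents applicants before 2018"],
    )],
    evaluations=["fairness audit 2024-Q1", "hold-out accuracy report v2.3"],
)

print(json.dumps(asdict(manifest), indent=2))        # serializable, disclosable record
```
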
4. Profound Implications for the Legal Industry: Seizing Service Opportunities Amidst Risk Challenges

The legal liability issues surrounding AI products and services not only pose significant challenges to the broader legal system but also present new business opportunities and higher professional demands for the legal industry itself, which operates at the forefront of this transformation:

  • Providing Forward-Looking AI Risk & Compliance Legal Advisory Services Becomes a New Frontier:
    • As AI adoption accelerates across industries, all clients involved in developing, procuring, deploying, or using AI technologies (large tech firms, traditional companies undergoing digital transformation, AI startups) will urgently need expert legal support. They need help understanding the complex, evolving AI legal/regulatory landscape; accurately assessing various legal liability risks in their AI applications (tort, product liability, contract, data, IP, labor, antitrust, etc.); designing and implementing effective internal compliance management systems; and providing expert legal strategies when facing related disputes or regulatory inquiries. This opens up a vast, high-value new practice area for lawyers and firms capable of staying ahead of tech/legal trends and possessing interdisciplinary knowledge.
  • Handling Emerging, Complex AI-Related Litigation & Dispute Resolution Cases:
    • It is foreseeable that litigation, arbitration, or regulatory investigations involving harm caused by defects, errors, biases, or misuse of AI systems (products or services) will increase significantly. These cases are often characterized by high technical complexity (requiring understanding of algorithms, data, system interactions), novelty in legal application (interpreting existing rules for new scenarios, tracking emerging rules), and involvement of multiple complex parties.
    • Successfully handling such cases places new, higher demands on lawyers: requiring not only solid traditional legal expertise (tort, contract, product liability, evidence law) but also a considerable understanding of AI technology (at least basic principles, key risks, ability to communicate effectively with tech experts); needing to closely follow latest judicial precedents, legislative developments, and academic research in AI liability globally; and often demanding the ability to collaborate effectively across disciplines with technical experts, data scientists, AI ethicists to build evidence and legal arguments. This represents both a challenge and an opportunity for future litigators.
  • Legal Service Providers Must Strengthen Own AI Application Risk Management & Professional Responsibility Awareness:
    • Legal service providers (law firms, legal departments) themselves, when exploring and adopting AI tools to enhance internal efficiency or client services (e.g., using AI for legal research, contract review, document drafting, case management), must scrutinize and manage the resulting potential legal liability and professional ethics risks to the highest standards.
    • Institutions must ensure their internal AI usage practices fully comply with all relevant laws (esp. data security/privacy) and professional conduct rules; ensure they exercise reasonable care and supervision when using AI assistance; and ensure all AI outputs undergo thorough, professional human review and final validation to avoid harming client interests, breaching professional duties, or damaging the institution’s reputation due to improper technology use. The organization’s own AI governance maturity (Ref Section 6.5) is directly linked to its ability to operate safely and compliantly in the AI era (a toy sketch of one such verification step, flagging citations in an AI-generated draft for mandatory checking, appears after this list).
  • Legal Professionals Need to Continuously Monitor Legislative Frontiers & Participate in Rule-Making:
    • AI liability rules represent a frontier area of law and technology globally, characterized by high activity, rapid evolution, theoretical controversy, and practical exploration. As legal professionals, we cannot merely be content with knowing existing rules. We must maintain a state of continuous learning and forward-looking perspective, closely tracking relevant draft legislation progress, landmark case rulings, authoritative academic research, and development of key industry best practices and standards both domestically and internationally.
    • Furthermore, legal professionals should actively participate in relevant legislative consultations, policy discussions, and standard-setting processes, contributing our unique insights and wisdom derived from legal expertise and practical experience. We should strive to help shape a legal liability system for the intelligent era that both encourages responsible technological innovation and effectively protects legitimate rights and interests, ultimately promoting social fairness and justice.
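
As a toy illustration of making the “human review and final validation” duty operational for one narrow risk (the fabricated-citation problem mentioned above), the sketch below scans an AI-generated draft for citation-like strings and turns them into a verification checklist. The regex patterns are a rough approximation of US-style citations and are illustrative assumptions only, not a real citation checker or a substitute for professional review.

```python
# Toy sketch: flag citation-like strings in an AI-generated draft for mandatory
# human verification before filing. Patterns and workflow are illustrative
# assumptions, not a real citation checker.
import re

# Very rough patterns for US-style case and statute citations (illustrative only).
CASE_PATTERN = re.compile(r"\b\d+\s+[A-Z][A-Za-z.]*\s*(?:\d+d)?\s+\d+\b")
STATUTE_PATTERN = re.compile(r"\b\d+\s+U\.S\.C\.\s+§+\s*\d+[a-z]*\b")

def citation_checklist(draft: str) -> list:
    """Return every citation-like string found, each requiring human verification."""
    hits = CASE_PATTERN.findall(draft) + STATUTE_PATTERN.findall(draft)
    return sorted(set(hits))

# An invented draft snippet containing citation-like strings (not real authorities).
draft = "As held in 123 F.3d 456 and codified at 15 U.S.C. § 78j, ..."

for cite in citation_checklist(draft):
    print(f"[ ] Verify against primary source before filing: {cite}")
```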

Conclusion: Cautiously Seeking Rule Boundaries Amidst Uncertainty, Wisely Embracing the Intelligent Future Within a Framework of Responsibility


The legal liability issues arising from the widespread application of AI products and services undoubtedly represent one of the most challenging and profoundly significant major themes confronting the entire legal system as it strives to adapt and respond to disruptive technological change. Our existing liability frameworks, primarily designed around the accountability of human actions and the predictability of traditional physical products, indeed exhibit numerous conceptual ambiguities, logical inconsistencies, and practical maladaptations or even “failures” when applied directly to AI systems characterized by “black box” properties, high data dependency, complex multi-actor involvement, and even limited autonomous learning and behavioral evolution. Challenges are particularly acute in proving causation, defining reasonable standards of care, delineating between “products” and “services,” and fairly and effectively allocating responsibility within complex value chains.

Although under prevailing legal views globally, AI itself cannot yet be considered an independent subject of legal liability, and the consequences of its actions must ultimately be attributed through legal rules to relevant human or organizational actors behind it (designers, trainers, deployers/operators, users), the question of how to fairly, effectively, and operationally apply or adapt familiar principles like fault liability, product liability (including strict liability), and contract liability in this new technological reality remains a vast area of great uncertainty, urgently needing theoretical innovation and practical exploration.

Looking ahead, we can anticipate that as AI technology further develops and related case law accumulates, legal systems worldwide will likely undergo gradual evolution and development towards rules better adapted to the characteristics of AI. This might include: clearer legislative or judicial definitions of duties of care and liability boundaries for various actors; exploration of adjusted burden of proof rules in specific high-risk AI scenarios; introduction of special liability regimes (leaning towards stricter liability or no-fault compensation) for certain AI applications; and strengthening the role of pre-market regulation, technical standards, and certification in risk prevention.

For us as legal professionals, deeply understanding the inherent complexities in determining AI liability, the uncertainties in existing rules, and the potential future directions of development is key to being able to effectively protect client interests, manage our own professional risks, and actively participate in shaping a legal liability system fit for the intelligent age in this emerging and critically important field. This requires us not only to master traditional legal knowledge and skills but also to bravely embrace cross-disciplinary learning (understanding technology, risks, ethics), maintain forward-looking thinking, and practice adaptively amidst evolving rules and uncertainties. The next chapter will further explore the specific integration of AI into judicial processes, its potential, and the deeper challenges this entails.