
The field of artificial intelligence (AI) is evolving rapidly. AI technology and applications have existed for decades, but recent years have seen significant advances, driven in part by generative AI systems such as ChatGPT, Gemini (formerly Google Bard), and DALL-E. In the manufacturing industry, AI and machine learning applications are already widely used by manufacturers, suppliers, logistics service providers, and retailers. These uses range from cost-saving AI solutions for energy efficiency, robot programming, repairs, and industrial robots to forecasting trends in customer behavior and product demand.
The varied applications of AI systems across different sectors of the manufacturing industry, and the rapid pace of developments, raise questions about the consequences of their use, potential damage, and (civil) liability. As a result, the European Commission has been working since 2018 on a new regulatory framework for AI systems. In an earlier blog series, our Dutch product regulation & CE marking lawyers in Amsterdam, the Netherlands, discussed the proposal for the AI Regulation, the ensuing legal requirements and obligations for market participants, and the revision of European product liability legislation.
In addition, on September 28, 2022, the European Commission published its proposal for a directive on adapting non-contractual civil liability rules to artificial intelligence (the AI Liability Directive). This blog takes a closer look at its implications for claims based on liability for damage caused by AI-based products and services.
Background and objectives of the AI Liability Directive
Besides the benefits of AI applications, such as technological advancements and new business models across different sectors of the digital economy, the autonomous behavior and self-learning nature of AI systems also bring new risks and challenges.
The AI Regulation and EU legislation on specific products lay down rules to limit the risks to safety and fundamental rights and to prevent damage caused by AI systems. However, these rules do not address compensation for damage caused by the output of an AI system, or by the failure of an AI system to produce output.
Moreover, existing (national) civil liability rules for human actions, products, or other technologies are often ill-suited to the autonomy, opacity, and complexity of AI systems. For instance, it can be very complicated to prove that a specific input from a potentially liable party caused a specific output of an AI system that led to damage.
It is also uncertain how the existing liability rules would be adapted to the specific characteristics of AI systems and applied in individual cases. This can lead to legal uncertainty and fragmentation among EU Member States.
The objectives of the AI Liability Directive include, among other things, preventing compensation gaps in cases involving the use of AI systems, increasing legal certainty, and creating a level playing field for manufacturers inside and outside the EU. Companies and individuals seeking compensation for damage caused by an AI system should receive the same level of protection as for damage caused by other technologies.
The AI Liability Directive also aims to modernize existing non-contractual civil liability rules and adapt them to the unique characteristics of AI systems. This is intended to ensure a high level of protection for businesses and individuals while also promoting innovation in the AI sector.
What are the key features of the AI Liability Directive?
Our lawyers often receive the question: what are the key features of the AI Liability Directive? In short, the directive eases the claimant's burden of proof by introducing rules on the disclosure of evidence and rebuttable presumptions.
Disclosure of Evidence
To establish or substantiate a claim for damages, it is crucial to have access to information about the AI system that may have caused the damage. The AI Liability Directive makes it possible to ask the court to require a provider or user of an AI system to disclose relevant evidence. This can even happen before a claim for damages is filed. The court can also require a provider or user to preserve evidence.
Certain conditions apply. For instance, the rule only applies to high-risk AI systems, as these systems are subject to specific documentation, registration, and information requirements. The potential claimant must also present sufficient facts and evidence to convince the court of the plausibility of the claim. Additionally, the provider or user of the AI system must have the evidence or access to it and must first be given the opportunity to disclose it voluntarily.
The court can only order the disclosure of requested information if it is necessary and reasonable to substantiate the claim. This prevents overly general requests and also protects the legitimate interests of the provider or user, for example, if trade secrets or confidential information are involved. The court can also take measures to ensure confidentiality. Finally, the provider or user has the right to appeal against an order to disclose or preserve evidence.
Rebuttable Presumption of Non-Compliance
The AI Liability Directive also introduces a rebuttable presumption of non-compliance. This presumption arises if a provider or user of an AI system fails to comply with an order to disclose or preserve evidence. The court will then presume that the provider or user has breached their duty of care towards the claimant.
The provider or user has the right to rebut this presumption. The aim of this measure is to encourage the disclosure of evidence and to expedite legal proceedings.
Rebuttable Presumption of Causality in Case of Fault
The AI Liability Directive also establishes a rebuttable presumption of a causal link between the fault of a provider or user of an AI system and the output of that system (or its failure to produce output) that caused the damage. This makes it easier for an injured party to claim compensation.
If an injured party can demonstrate that a provider or user of an AI system was at fault by breaching a duty of care, for example an obligation under the AI Regulation, the court may presume a causal link between that fault and the output produced by the AI system (or its failure to produce output). This presumption of causality applies only if it can be considered reasonably likely that the fault influenced the AI system's output. The injured party must still prove that the output (or the lack thereof) caused the damage.
The defendant can rebut this presumption by proving that the damage was caused by another factor.
Status of the AI Liability Directive proposal
It has now been more than two years since the proposal for an AI Liability Directive was published. Since then, the proposal has been under discussion in the European Parliament and the Council. On December 14, 2023, the legislators did reach an informal agreement on the text of the related revised Product Liability Directive, but for the AI Liability Directive itself, further steps are still awaited.
Since the AI Liability Directive is linked to the AI Regulation and the revised Product Liability Directive, developments in these pieces of legislation are also relevant. During the legislative process for the AI Regulation, for example, there were extensive negotiations on the definition of an AI system and the classification of AI systems as high-risk. These key concepts were ultimately significantly adjusted. Additionally, the Product Liability Directive now also applies to software and includes provisions on damages due to data loss.
In light of these developments, the Council has put additional questions to the EU Member States. The European Parliament has also had a complementary impact assessment of the current text of the AI Liability Directive carried out, which was published on September 19, 2024. This assessment proposes, among other things, to:
- Replace the directive with a regulation so that rules apply directly in all EU Member States, preventing market fragmentation;
- Expand the scope from AI to all software;
- Include general-purpose AI systems and other high-impact AI systems under the rules;
- Introduce a mixed liability framework balancing fault-based and strict liability.
Given these developments and the criticism, the future of the AI Liability Directive is uncertain. Rules on this subject are still needed, but it remains to be seen in what form they will eventually be adopted. More clarity on the status of the AI Liability Directive proposal and possible next steps is expected in early 2025.
Legal advice from a lawyer specialized in artificial intelligence
Are you looking for legal advice from a lawyer specialized in artificial intelligence or related topics? Feel free to reach out to our experienced team at MAAK Advocaten. Our dedicated Dutch lawyers are committed to delivering outstanding legal services tailored to your unique needs. You can easily connect with our law firm through our website, by email, or by phone.
Our friendly and professional team at MAAK Attorneys is ready to assist you. We can schedule a meeting with one of our specialized attorneys in the Netherlands. Whether you need a Dutch litigation attorney or a Dutch contract lawyer in Amsterdam, we are here to provide our legal services and achieve the best possible outcome for your situation.
Contact details
Remko Roosjen | attorney-at-law (‘advocaat’)
+31 (0)20 – 210 31 38
remko.roosjen@maakadvocaten.nl
The information provided on this legal blog is for educational purposes only and should not be considered as specific legal advice. While we strive to keep our content accurate and up to date, we do not guarantee its completeness or relevance to your specific situation. For personalized legal advice, we recommend consulting a licensed attorney. Please note that the content on this blog may change without notice, and we disclaim liability for any inaccuracies or omissions.