The EU’s approach to AI liability: From liability-centric reform to risk-based regulation

Paul Micallef Grimaud, Christina Scicluna, Andrea Grima

The European Union’s approach to governing artificial intelligence (AI) has undergone a significant inversion, shifting from a liability-centric stance to ex ante, compliance-based regulation.

Initially, the focus was firmly placed on civil liability reform. In 2017, the European Parliament’s Resolution on Civil Law Rules on Robotics had already highlighted the shortcomings of ordinary liability rules in dealing with damages caused by autonomous robots learning from their own variable experience and interacting with their environment in a unique and unforeseeable manner.

The expectation in 2022 was that these gaps would be filled by the AI Liability Directive (AILD). However, that law never made it past the proposal stage: the draft AILD was formally withdrawn by the European Commission in the latter half of 2025, amid a soul-searching exercise over over-regulation that followed the United States’ turn towards de-regulation and simplification.

Where do we stand?

Establishing AI liability under traditional laws is not simple.

At a local level, tort and/or contractual damages may arise where the use of AI causes harm. Under our traditional laws, establishing and proving an AI malfunction, and determining who should bear responsibility for it, would require a complex case-by-case analysis in which questions of cause and effect and the juridical relationships between the parties come to the fore.

Similarly, the principle of criminal intent under our criminal law constitutes a hurdle to establishing criminal responsibility for a criminal act caused by an AI system. Naturally, a distinction must be drawn between good-faith actors and those who intentionally use AI to cause harm to others.

The EU’s primary focus is to reduce the impact of damage occurring when AI systems are used, through the application of the AI Act, which caters for and attempts to mitigate risks (see our previous articles in this series for a more detailed explanation). In short, the AI Act establishes a hierarchy of obligations based on a system’s potential to cause harm, opting for an ex-ante, risk-based compliance model. The higher the risk, the more onerous the obligations.

Liability, however, remains governed by national laws, whether resulting from the transposition of EU directives or otherwise, and, in certain instances, such as the liability of providers of intermediary services, by harmonised EU laws such as the Digital Services Act (DSA).

The New Product Liability Directive (PLD): The Complementary Law

The new PLD must be transposed by all Member States by December 9, 2026. It brings a significant development to the existing legal framework on liability resulting from AI.

Firstly, the definitions of “product” and “manufacturer” under the current PLD have been broadened to include software and AI-integrated products, as well as third-party software developers that carry out unauthorised substantial modifications to software.

The aim of the new PLD is that there should always be one economic operator in the supply chain who can be held liable for the damage caused by a defective product. Where several economic operators are liable, they may be held jointly and severally liable.

While the manufacturer of an AI product, often the provider under the AI Act, would be the primary party held liable for damages caused by that defective product, deployers can inadvertently assume this liability.

Under the new PLD, any person may be reclassified as a manufacturer if they perform an unauthorised substantial modification that contributes to a defect, or if they present themselves as the manufacturer by applying their own name, trademark, or distinguishing features to the product.

In either scenario, this person, who may be the deployer under the AI Act, steps into the shoes of the manufacturer and assumes liability, shifting the legal risk from the original developer onto themselves.

For deployers of AI systems, the PLD thus calls for care when modifying third-party AI products or branding them as their own.

In addition, the new PLD introduces procedural mechanisms to help claimants overcome evidentiary hurdles, including a presumption of defectiveness where:

■ the defendant fails to disclose relevant evidence at the defendant’s disposal;

■ the claimant demonstrates that the product does not comply with mandatory product safety requirements laid down in the EU (including those laid down in the AI Act) or national law that are intended to protect against the risk of the damage suffered by the injured person; or

■ the claimant demonstrates that the damage was caused by an obvious malfunction of the product during reasonably foreseeable use or under ordinary circumstances.

Despite significant concerns, relating mainly to the limited scope of the harm addressed under the PLD, this piece of legislation amplifies the need for regulatory compliance in order to avoid being caught by its strict liability regime. This sits nicely with the EU’s stance of harmonising AI regulatory compliance based on risk through the AI Act.

In sum, the EU’s framework prioritises an ex-ante risk-based compliance approach through the AI Act, leaving the mechanisms for relief to the new PLD and the application of general tort and contract law.

Providers and deployers of AI systems alike must ensure that they understand their compliance obligations and liability exposure before placing systems on the market or deploying them within their commercial activity.
