The European Commission Withdraws the Proposed AI Liability Directive

In a move that has taken many in the technology and legal sectors by surprise, the European Commission has withdrawn its proposed AI Liability Directive (AILD). The directive aimed to establish clear rules regarding liability for damages caused by artificial intelligence systems.

The decision, revealed in the Commission’s 2025 work programme, cites “no foreseeable agreement” as the reason, with the Commission indicating it will assess whether an alternative approach is needed.

The full reasons for the withdrawal remain unclear, but sources suggest that concerns from both AI developers and users played a significant role, likely centring on the potential to stifle innovation and to create excessive legal burdens.

According to a Euractiv article, the Commission withdrew the AI Liability Directive after criticism of EU tech regulation by US Vice-President JD Vance at the AI Action Summit in Paris: “In this context, withdrawing the AI liability directive can be understood as a strategic manoeuvre by the EU to present an image of openness to capital and innovation, to show it prioritises competitiveness and show goodwill to the new US administration.”

In the words of the European Parliament’s rapporteur, Axel Voss: “By scrapping the AI Liability Directive, the Commission is actively choosing legal uncertainty, corporate power imbalances, and a Wild West approach to AI liability that benefits only Big Tech.”

A Little Background

The AILD, proposed on September 28, 2022, aimed to complement the EU AI Act by modernizing the EU liability framework with new rules specific to damages caused by AI systems, ensuring that individuals harmed by AI systems receive the same level of protection as those harmed by other technologies. It sought to address the challenges posed by AI to existing liability rules, particularly regarding the difficulty in establishing causality.

One of the directive’s key provisions was the introduction of a rebuttable ‘presumption of causality,’ intended to ease the burden of proof for victims seeking to establish damage caused by an AI system. This would have allowed courts to presume a causal link between a breach of duty by an AI system and the damage suffered, effectively shifting the burden of proof to AI providers and users. The proposal also granted courts the power to order the disclosure of evidence about high-risk AI systems suspected of causing damage.

However, the directive faced criticism and concerns from various stakeholders. Insurance Europe, for example, called for the AILD’s withdrawal, arguing that it would create legal uncertainty, increase the compliance burden for businesses, and potentially confuse consumers about their rights. MedTech Europe, along with 11 other industry associations, called on EU policymakers to withdraw the proposed AILD, arguing that the directive could complicate legal frameworks, hinder competitiveness, and deter investment in AI innovation within the European Union. Some also worried about potential overlap or inconsistencies between the AILD, the AI Act, and the Product Liability Directive.

AI Liability in Flux: What’s Next?

Despite the withdrawal of the AILD, the revised Product Liability Directive, in force since December 2024, continues to apply. Under this directive, a party claiming damages may only need to prove that an AI system was defective, not that anyone was negligent. If an AI model embedded in a broader system is found to be defective, the entire system could be deemed defective, potentially exposing multiple parties to legal claims.

The European Parliament’s Legal Affairs Committee (JURI) had been considering substantial changes to the AILD’s scope, including the possibility of turning it into a regulation directly applicable in all member states to avoid discrepancies. The JURI committee requested a complementary impact assessment study from the European Parliamentary Research Service (EPRS), which suggested that the AI liability directive should extend its scope to include general-purpose and other ‘high-impact AI systems’, as well as software. The study also recommended transitioning from an AI-focused directive to a software liability regulation, to prevent market fragmentation and enhance clarity across the EU.

Wrapping Up

The Commission’s decision to withdraw the AILD leaves a gap in the EU’s regulatory landscape for AI liability. It remains uncertain whether the Commission will propose an alternative approach or leave the issue to be governed by national laws.

For insights into industry perspectives and the potential impact on AI innovation, contact MedQAIR to learn more about the implications of this decision.
